Test Report: KVM_Linux_crio 19479

913baf54a454bfbef3be1ea09a51779f85ec9369:2024-08-19:35854

Tests failed (31/312)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 152.9
36 TestAddons/parallel/MetricsServer 317.32
45 TestAddons/StoppedEnableDisable 154.28
117 TestFunctional/parallel/ImageCommands/ImageListShort 2.3
164 TestMultiControlPlane/serial/StopSecondaryNode 141.84
166 TestMultiControlPlane/serial/RestartSecondaryNode 51.7
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.37
171 TestMultiControlPlane/serial/StopCluster 141.73
231 TestMultiNode/serial/RestartKeepsNodes 322.61
233 TestMultiNode/serial/StopMultiNode 141.38
240 TestPreload 170.97
248 TestKubernetesUpgrade 390.37
285 TestPause/serial/SecondStartNoReconfiguration 53.05
316 TestStartStop/group/old-k8s-version/serial/FirstStart 296.56
339 TestStartStop/group/no-preload/serial/Stop 139.49
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 92.37
360 TestStartStop/group/embed-certs/serial/Stop 139.17
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 699.43
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
368 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.22
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.23
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.22
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.48
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 460.02
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 530.24
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 303.6
375 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 165.63
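
Each failure above can usually be re-run in isolation from a minikube source checkout with the standard Go test runner. A minimal sketch for the first entry, assuming a built out/minikube-linux-amd64 binary and the kvm2/crio configuration this job uses; the --minikube-start-args flag is the one the integration suite normally accepts, but the exact flags may differ between versions:

	# Re-run only the Ingress addon test against the kvm2 driver and crio runtime (hypothetical invocation).
	go test ./test/integration -v -timeout 60m \
		-run "TestAddons/parallel/Ingress" \
		--minikube-start-args="--driver=kvm2 --container-runtime=crio"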
TestAddons/parallel/Ingress (152.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-442145 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-442145 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-442145 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9bfb1b5c-61d0-4012-be13-db86ac5187f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9bfb1b5c-61d0-4012-be13-db86ac5187f6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004419975s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-442145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.530043285s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-442145 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.65
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 addons disable ingress-dns --alsologtostderr -v=1: (1.719691587s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 addons disable ingress --alsologtostderr -v=1: (7.700654801s)
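
The step that fails here is the in-VM request issued through minikube ssh at addons_test.go:264: ssh reports exit status 28, which is curl's exit code for an operation timeout, so the curl to 127.0.0.1 inside the VM timed out rather than returning the nginx page. One way to poke at this manually, assuming the addons-442145 profile is still running and using the resource names from the log above:

	# Check that the ingress-nginx controller and the test nginx pod/ingress exist.
	kubectl --context addons-442145 -n ingress-nginx get pods
	kubectl --context addons-442145 get ingress,svc,pods -o wide

	# Retry the in-VM request with verbose output and an explicit timeout.
	out/minikube-linux-amd64 -p addons-442145 ssh \
		"curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"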
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-442145 -n addons-442145
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 logs -n 25: (1.174512744s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-919545                                                                     | download-only-919545 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| delete  | -p download-only-096642                                                                     | download-only-096642 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-869957 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | binary-mirror-869957                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44967                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-869957                                                                     | binary-mirror-869957 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-442145 --wait=true                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-442145 ssh cat                                                                       | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | /opt/local-path-provisioner/pvc-6db4af90-c59c-4e3b-9fe3-ca012679ce85_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | -p addons-442145                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| ip      | addons-442145 ip                                                                            | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | -p addons-442145                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-442145 ssh curl -s                                                                   | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-442145 addons                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-442145 ip                                                                            | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:02 UTC | 19 Aug 24 12:02 UTC |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:02 UTC | 19 Aug 24 12:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:02 UTC | 19 Aug 24 12:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:56:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:56:52.483806 2094815 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:52.484097 2094815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:52.484107 2094815 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:52.484112 2094815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:52.484368 2094815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 11:56:52.484996 2094815 out.go:352] Setting JSON to false
	I0819 11:56:52.485972 2094815 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160762,"bootTime":1723907850,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:56:52.486036 2094815 start.go:139] virtualization: kvm guest
	I0819 11:56:52.487954 2094815 out.go:177] * [addons-442145] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:56:52.489283 2094815 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 11:56:52.489311 2094815 notify.go:220] Checking for updates...
	I0819 11:56:52.491715 2094815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:52.493446 2094815 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 11:56:52.494788 2094815 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:52.496147 2094815 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:56:52.497502 2094815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:52.498737 2094815 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:56:52.531754 2094815 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 11:56:52.532860 2094815 start.go:297] selected driver: kvm2
	I0819 11:56:52.532884 2094815 start.go:901] validating driver "kvm2" against <nil>
	I0819 11:56:52.532897 2094815 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:52.533635 2094815 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:52.533730 2094815 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:56:52.549142 2094815 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:56:52.549198 2094815 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:52.549421 2094815 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:52.549453 2094815 cni.go:84] Creating CNI manager for ""
	I0819 11:56:52.549465 2094815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:56:52.549473 2094815 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:52.549530 2094815 start.go:340] cluster config:
	{Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:52.549625 2094815 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:52.551235 2094815 out.go:177] * Starting "addons-442145" primary control-plane node in "addons-442145" cluster
	I0819 11:56:52.552105 2094815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:56:52.552135 2094815 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:56:52.552145 2094815 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:52.552259 2094815 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:56:52.552273 2094815 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:56:52.552573 2094815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/config.json ...
	I0819 11:56:52.552594 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/config.json: {Name:mk47493314c6c24cda5c36b266fbefee8b948291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:52.552752 2094815 start.go:360] acquireMachinesLock for addons-442145: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:52.552819 2094815 start.go:364] duration metric: took 48.351µs to acquireMachinesLock for "addons-442145"
	I0819 11:56:52.552843 2094815 start.go:93] Provisioning new machine with config: &{Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:56:52.552902 2094815 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 11:56:52.554257 2094815 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 11:56:52.554400 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:56:52.554447 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:56:52.568830 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0819 11:56:52.569273 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:56:52.569810 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:56:52.569838 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:56:52.570191 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:56:52.570358 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:56:52.570512 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:56:52.570647 2094815 start.go:159] libmachine.API.Create for "addons-442145" (driver="kvm2")
	I0819 11:56:52.570673 2094815 client.go:168] LocalClient.Create starting
	I0819 11:56:52.570719 2094815 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 11:56:52.902299 2094815 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 11:56:53.108326 2094815 main.go:141] libmachine: Running pre-create checks...
	I0819 11:56:53.108365 2094815 main.go:141] libmachine: (addons-442145) Calling .PreCreateCheck
	I0819 11:56:53.108967 2094815 main.go:141] libmachine: (addons-442145) Calling .GetConfigRaw
	I0819 11:56:53.109462 2094815 main.go:141] libmachine: Creating machine...
	I0819 11:56:53.109476 2094815 main.go:141] libmachine: (addons-442145) Calling .Create
	I0819 11:56:53.109658 2094815 main.go:141] libmachine: (addons-442145) Creating KVM machine...
	I0819 11:56:53.111190 2094815 main.go:141] libmachine: (addons-442145) DBG | found existing default KVM network
	I0819 11:56:53.112057 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.111911 2094837 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0819 11:56:53.112128 2094815 main.go:141] libmachine: (addons-442145) DBG | created network xml: 
	I0819 11:56:53.112158 2094815 main.go:141] libmachine: (addons-442145) DBG | <network>
	I0819 11:56:53.112187 2094815 main.go:141] libmachine: (addons-442145) DBG |   <name>mk-addons-442145</name>
	I0819 11:56:53.112198 2094815 main.go:141] libmachine: (addons-442145) DBG |   <dns enable='no'/>
	I0819 11:56:53.112223 2094815 main.go:141] libmachine: (addons-442145) DBG |   
	I0819 11:56:53.112253 2094815 main.go:141] libmachine: (addons-442145) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 11:56:53.112265 2094815 main.go:141] libmachine: (addons-442145) DBG |     <dhcp>
	I0819 11:56:53.112274 2094815 main.go:141] libmachine: (addons-442145) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 11:56:53.112285 2094815 main.go:141] libmachine: (addons-442145) DBG |     </dhcp>
	I0819 11:56:53.112296 2094815 main.go:141] libmachine: (addons-442145) DBG |   </ip>
	I0819 11:56:53.112311 2094815 main.go:141] libmachine: (addons-442145) DBG |   
	I0819 11:56:53.112321 2094815 main.go:141] libmachine: (addons-442145) DBG | </network>
	I0819 11:56:53.112332 2094815 main.go:141] libmachine: (addons-442145) DBG | 
	I0819 11:56:53.117383 2094815 main.go:141] libmachine: (addons-442145) DBG | trying to create private KVM network mk-addons-442145 192.168.39.0/24...
	I0819 11:56:53.187667 2094815 main.go:141] libmachine: (addons-442145) DBG | private KVM network mk-addons-442145 192.168.39.0/24 created
	I0819 11:56:53.187721 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.187609 2094837 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:53.187751 2094815 main.go:141] libmachine: (addons-442145) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145 ...
	I0819 11:56:53.187781 2094815 main.go:141] libmachine: (addons-442145) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:56:53.187797 2094815 main.go:141] libmachine: (addons-442145) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:56:53.491523 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.491390 2094837 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa...
	I0819 11:56:53.803136 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.803003 2094837 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/addons-442145.rawdisk...
	I0819 11:56:53.803168 2094815 main.go:141] libmachine: (addons-442145) DBG | Writing magic tar header
	I0819 11:56:53.803178 2094815 main.go:141] libmachine: (addons-442145) DBG | Writing SSH key tar header
	I0819 11:56:53.803186 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.803132 2094837 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145 ...
	I0819 11:56:53.803319 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145
	I0819 11:56:53.803351 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 11:56:53.803366 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145 (perms=drwx------)
	I0819 11:56:53.803380 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:56:53.803386 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 11:56:53.803396 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 11:56:53.803402 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:56:53.803413 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:56:53.803421 2094815 main.go:141] libmachine: (addons-442145) Creating domain...
	I0819 11:56:53.803432 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:53.803445 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 11:56:53.803454 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:56:53.803464 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:56:53.803473 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home
	I0819 11:56:53.803488 2094815 main.go:141] libmachine: (addons-442145) DBG | Skipping /home - not owner
	I0819 11:56:53.804674 2094815 main.go:141] libmachine: (addons-442145) define libvirt domain using xml: 
	I0819 11:56:53.804693 2094815 main.go:141] libmachine: (addons-442145) <domain type='kvm'>
	I0819 11:56:53.804699 2094815 main.go:141] libmachine: (addons-442145)   <name>addons-442145</name>
	I0819 11:56:53.804705 2094815 main.go:141] libmachine: (addons-442145)   <memory unit='MiB'>4000</memory>
	I0819 11:56:53.804714 2094815 main.go:141] libmachine: (addons-442145)   <vcpu>2</vcpu>
	I0819 11:56:53.804720 2094815 main.go:141] libmachine: (addons-442145)   <features>
	I0819 11:56:53.804728 2094815 main.go:141] libmachine: (addons-442145)     <acpi/>
	I0819 11:56:53.804734 2094815 main.go:141] libmachine: (addons-442145)     <apic/>
	I0819 11:56:53.804741 2094815 main.go:141] libmachine: (addons-442145)     <pae/>
	I0819 11:56:53.804754 2094815 main.go:141] libmachine: (addons-442145)     
	I0819 11:56:53.804763 2094815 main.go:141] libmachine: (addons-442145)   </features>
	I0819 11:56:53.804772 2094815 main.go:141] libmachine: (addons-442145)   <cpu mode='host-passthrough'>
	I0819 11:56:53.804779 2094815 main.go:141] libmachine: (addons-442145)   
	I0819 11:56:53.804788 2094815 main.go:141] libmachine: (addons-442145)   </cpu>
	I0819 11:56:53.804794 2094815 main.go:141] libmachine: (addons-442145)   <os>
	I0819 11:56:53.804799 2094815 main.go:141] libmachine: (addons-442145)     <type>hvm</type>
	I0819 11:56:53.804805 2094815 main.go:141] libmachine: (addons-442145)     <boot dev='cdrom'/>
	I0819 11:56:53.804810 2094815 main.go:141] libmachine: (addons-442145)     <boot dev='hd'/>
	I0819 11:56:53.804816 2094815 main.go:141] libmachine: (addons-442145)     <bootmenu enable='no'/>
	I0819 11:56:53.804822 2094815 main.go:141] libmachine: (addons-442145)   </os>
	I0819 11:56:53.804827 2094815 main.go:141] libmachine: (addons-442145)   <devices>
	I0819 11:56:53.804832 2094815 main.go:141] libmachine: (addons-442145)     <disk type='file' device='cdrom'>
	I0819 11:56:53.804867 2094815 main.go:141] libmachine: (addons-442145)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/boot2docker.iso'/>
	I0819 11:56:53.804886 2094815 main.go:141] libmachine: (addons-442145)       <target dev='hdc' bus='scsi'/>
	I0819 11:56:53.804927 2094815 main.go:141] libmachine: (addons-442145)       <readonly/>
	I0819 11:56:53.804954 2094815 main.go:141] libmachine: (addons-442145)     </disk>
	I0819 11:56:53.804966 2094815 main.go:141] libmachine: (addons-442145)     <disk type='file' device='disk'>
	I0819 11:56:53.804979 2094815 main.go:141] libmachine: (addons-442145)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:56:53.805012 2094815 main.go:141] libmachine: (addons-442145)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/addons-442145.rawdisk'/>
	I0819 11:56:53.805027 2094815 main.go:141] libmachine: (addons-442145)       <target dev='hda' bus='virtio'/>
	I0819 11:56:53.805061 2094815 main.go:141] libmachine: (addons-442145)     </disk>
	I0819 11:56:53.805084 2094815 main.go:141] libmachine: (addons-442145)     <interface type='network'>
	I0819 11:56:53.805096 2094815 main.go:141] libmachine: (addons-442145)       <source network='mk-addons-442145'/>
	I0819 11:56:53.805108 2094815 main.go:141] libmachine: (addons-442145)       <model type='virtio'/>
	I0819 11:56:53.805119 2094815 main.go:141] libmachine: (addons-442145)     </interface>
	I0819 11:56:53.805131 2094815 main.go:141] libmachine: (addons-442145)     <interface type='network'>
	I0819 11:56:53.805144 2094815 main.go:141] libmachine: (addons-442145)       <source network='default'/>
	I0819 11:56:53.805155 2094815 main.go:141] libmachine: (addons-442145)       <model type='virtio'/>
	I0819 11:56:53.805167 2094815 main.go:141] libmachine: (addons-442145)     </interface>
	I0819 11:56:53.805178 2094815 main.go:141] libmachine: (addons-442145)     <serial type='pty'>
	I0819 11:56:53.805201 2094815 main.go:141] libmachine: (addons-442145)       <target port='0'/>
	I0819 11:56:53.805223 2094815 main.go:141] libmachine: (addons-442145)     </serial>
	I0819 11:56:53.805235 2094815 main.go:141] libmachine: (addons-442145)     <console type='pty'>
	I0819 11:56:53.805244 2094815 main.go:141] libmachine: (addons-442145)       <target type='serial' port='0'/>
	I0819 11:56:53.805255 2094815 main.go:141] libmachine: (addons-442145)     </console>
	I0819 11:56:53.805265 2094815 main.go:141] libmachine: (addons-442145)     <rng model='virtio'>
	I0819 11:56:53.805282 2094815 main.go:141] libmachine: (addons-442145)       <backend model='random'>/dev/random</backend>
	I0819 11:56:53.805296 2094815 main.go:141] libmachine: (addons-442145)     </rng>
	I0819 11:56:53.805306 2094815 main.go:141] libmachine: (addons-442145)     
	I0819 11:56:53.805314 2094815 main.go:141] libmachine: (addons-442145)     
	I0819 11:56:53.805325 2094815 main.go:141] libmachine: (addons-442145)   </devices>
	I0819 11:56:53.805337 2094815 main.go:141] libmachine: (addons-442145) </domain>
	I0819 11:56:53.805345 2094815 main.go:141] libmachine: (addons-442145) 
	I0819 11:56:53.809662 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:d1:71:35 in network default
	I0819 11:56:53.810217 2094815 main.go:141] libmachine: (addons-442145) Ensuring networks are active...
	I0819 11:56:53.810240 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:53.810982 2094815 main.go:141] libmachine: (addons-442145) Ensuring network default is active
	I0819 11:56:53.811304 2094815 main.go:141] libmachine: (addons-442145) Ensuring network mk-addons-442145 is active
	I0819 11:56:53.811856 2094815 main.go:141] libmachine: (addons-442145) Getting domain xml...
	I0819 11:56:53.812674 2094815 main.go:141] libmachine: (addons-442145) Creating domain...
	I0819 11:56:55.003250 2094815 main.go:141] libmachine: (addons-442145) Waiting to get IP...
	I0819 11:56:55.004077 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.004477 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.004498 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.004451 2094837 retry.go:31] will retry after 305.688251ms: waiting for machine to come up
	I0819 11:56:55.312281 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.312816 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.312848 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.312764 2094837 retry.go:31] will retry after 256.992875ms: waiting for machine to come up
	I0819 11:56:55.571588 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.572365 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.572398 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.572328 2094837 retry.go:31] will retry after 333.382572ms: waiting for machine to come up
	I0819 11:56:55.906859 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.907261 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.907302 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.907220 2094837 retry.go:31] will retry after 527.606663ms: waiting for machine to come up
	I0819 11:56:56.436945 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:56.437464 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:56.437493 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:56.437407 2094837 retry.go:31] will retry after 696.017259ms: waiting for machine to come up
	I0819 11:56:57.135602 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:57.136128 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:57.136175 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:57.136090 2094837 retry.go:31] will retry after 639.282514ms: waiting for machine to come up
	I0819 11:56:57.776869 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:57.777325 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:57.777355 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:57.777285 2094837 retry.go:31] will retry after 1.177662974s: waiting for machine to come up
	I0819 11:56:58.956745 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:58.957115 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:58.957137 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:58.957079 2094837 retry.go:31] will retry after 1.305160326s: waiting for machine to come up
	I0819 11:57:00.264660 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:00.265092 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:00.265125 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:00.265041 2094837 retry.go:31] will retry after 1.174424389s: waiting for machine to come up
	I0819 11:57:01.441319 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:01.441690 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:01.441717 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:01.441661 2094837 retry.go:31] will retry after 1.42161492s: waiting for machine to come up
	I0819 11:57:02.865080 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:02.865511 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:02.865547 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:02.865452 2094837 retry.go:31] will retry after 1.822850539s: waiting for machine to come up
	I0819 11:57:04.689761 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:04.690195 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:04.690215 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:04.690175 2094837 retry.go:31] will retry after 2.578005011s: waiting for machine to come up
	I0819 11:57:07.270904 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:07.271284 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:07.271305 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:07.271269 2094837 retry.go:31] will retry after 3.74788868s: waiting for machine to come up
	I0819 11:57:11.023591 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:11.024020 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:11.024047 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:11.023971 2094837 retry.go:31] will retry after 4.593179123s: waiting for machine to come up
	I0819 11:57:15.620529 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.621040 2094815 main.go:141] libmachine: (addons-442145) Found IP for machine: 192.168.39.65
	I0819 11:57:15.621064 2094815 main.go:141] libmachine: (addons-442145) Reserving static IP address...
	I0819 11:57:15.621075 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has current primary IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.621390 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find host DHCP lease matching {name: "addons-442145", mac: "52:54:00:9d:f2:1a", ip: "192.168.39.65"} in network mk-addons-442145
	I0819 11:57:15.715288 2094815 main.go:141] libmachine: (addons-442145) DBG | Getting to WaitForSSH function...
	I0819 11:57:15.715323 2094815 main.go:141] libmachine: (addons-442145) Reserved static IP address: 192.168.39.65
	I0819 11:57:15.715336 2094815 main.go:141] libmachine: (addons-442145) Waiting for SSH to be available...
	I0819 11:57:15.718339 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.718867 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:15.718903 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.719084 2094815 main.go:141] libmachine: (addons-442145) DBG | Using SSH client type: external
	I0819 11:57:15.719111 2094815 main.go:141] libmachine: (addons-442145) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa (-rw-------)
	I0819 11:57:15.719150 2094815 main.go:141] libmachine: (addons-442145) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:57:15.719165 2094815 main.go:141] libmachine: (addons-442145) DBG | About to run SSH command:
	I0819 11:57:15.719181 2094815 main.go:141] libmachine: (addons-442145) DBG | exit 0
	I0819 11:57:15.844638 2094815 main.go:141] libmachine: (addons-442145) DBG | SSH cmd err, output: <nil>: 
	I0819 11:57:15.844904 2094815 main.go:141] libmachine: (addons-442145) KVM machine creation complete!
	I0819 11:57:15.845224 2094815 main.go:141] libmachine: (addons-442145) Calling .GetConfigRaw
	I0819 11:57:15.847748 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:15.847978 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:15.848203 2094815 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:57:15.848225 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:15.849714 2094815 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:57:15.849733 2094815 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:57:15.849742 2094815 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:57:15.849772 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:15.852476 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.852774 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:15.852806 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.853065 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:15.853242 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.853413 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.853591 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:15.853768 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:15.853971 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:15.853984 2094815 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:57:15.955677 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
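(Annotation: the "Waiting for SSH to be available" phase logged above amounts to repeatedly probing the guest's port 22 and then running a trivial `exit 0` over SSH until it succeeds. A minimal sketch of such a polling loop in Go, standard library only; the address and timeouts are illustrative values, not taken from the minikube source:)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls addr until a TCP connection succeeds or the deadline passes.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("192.168.39.65:22", 3*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		// Once the port answers, a follow-up "exit 0" over SSH confirms logins work.
		fmt.Println("SSH port is open")
	}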
	I0819 11:57:15.955704 2094815 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:57:15.955712 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:15.958741 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.959105 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:15.959138 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.959313 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:15.959560 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.959749 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.959900 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:15.960066 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:15.960294 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:15.960309 2094815 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:57:16.064922 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:57:16.065003 2094815 main.go:141] libmachine: found compatible host: buildroot
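(Annotation: the "found compatible host: buildroot" decision follows directly from the `cat /etc/os-release` output above. A small sketch of reading the ID field from that file, per the os-release(5) format; illustrative only, not the libmachine implementation:)

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			fmt.Println("cannot read os-release:", err)
			return
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "ID=") {
				// Values may be quoted, e.g. ID="buildroot".
				id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
				fmt.Println("detected distribution:", id) // "buildroot" on the minikube ISO
			}
		}
	}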
	I0819 11:57:16.065015 2094815 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:57:16.065027 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:57:16.065298 2094815 buildroot.go:166] provisioning hostname "addons-442145"
	I0819 11:57:16.065335 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:57:16.065521 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.068249 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.068629 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.068675 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.068784 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.068965 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.069116 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.069252 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.069396 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:16.069586 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:16.069601 2094815 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-442145 && echo "addons-442145" | sudo tee /etc/hostname
	I0819 11:57:16.187025 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442145
	
	I0819 11:57:16.187072 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.189948 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.190310 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.190342 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.190471 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.190685 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.190862 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.190990 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.191160 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:16.191411 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:16.191434 2094815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-442145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-442145/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-442145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:57:16.301392 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:57:16.301445 2094815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 11:57:16.301515 2094815 buildroot.go:174] setting up certificates
	I0819 11:57:16.301536 2094815 provision.go:84] configureAuth start
	I0819 11:57:16.301561 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:57:16.301906 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:16.304975 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.305363 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.305394 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.305592 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.307976 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.308392 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.308419 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.308568 2094815 provision.go:143] copyHostCerts
	I0819 11:57:16.308649 2094815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 11:57:16.308786 2094815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 11:57:16.308873 2094815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 11:57:16.308945 2094815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.addons-442145 san=[127.0.0.1 192.168.39.65 addons-442145 localhost minikube]
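(Annotation: the server certificate generated here is an ordinary x509 cert whose Subject Alternative Names are the IPs and hostnames listed in the san=[...] field above. A self-contained sketch of the same idea with Go's crypto/x509; it is self-signed for brevity, so it omits the CA signing that the log actually performs, and the organization/validity values are illustrative:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-442145"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the profile
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as listed in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
			DNSNames:    []string{"addons-442145", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}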
	I0819 11:57:16.546651 2094815 provision.go:177] copyRemoteCerts
	I0819 11:57:16.546716 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:57:16.546746 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.549703 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.550007 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.550029 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.550221 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.550429 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.550591 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.550754 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:16.637278 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 11:57:16.663873 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:57:16.688085 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:57:16.712248 2094815 provision.go:87] duration metric: took 410.694168ms to configureAuth
	I0819 11:57:16.712280 2094815 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:57:16.712449 2094815 config.go:182] Loaded profile config "addons-442145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:16.712530 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.715310 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.715666 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.715687 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.715857 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.716079 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.716250 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.716415 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.716613 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:16.716817 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:16.716836 2094815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:57:16.968249 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:57:16.968281 2094815 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:57:16.968289 2094815 main.go:141] libmachine: (addons-442145) Calling .GetURL
	I0819 11:57:16.969651 2094815 main.go:141] libmachine: (addons-442145) DBG | Using libvirt version 6000000
	I0819 11:57:16.971995 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.972350 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.972379 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.972606 2094815 main.go:141] libmachine: Docker is up and running!
	I0819 11:57:16.972625 2094815 main.go:141] libmachine: Reticulating splines...
	I0819 11:57:16.972632 2094815 client.go:171] duration metric: took 24.40194727s to LocalClient.Create
	I0819 11:57:16.972656 2094815 start.go:167] duration metric: took 24.402010567s to libmachine.API.Create "addons-442145"
	I0819 11:57:16.972667 2094815 start.go:293] postStartSetup for "addons-442145" (driver="kvm2")
	I0819 11:57:16.972676 2094815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:57:16.972712 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:16.972952 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:57:16.972982 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.975108 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.975458 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.975484 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.975670 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.975829 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.976001 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.976109 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:17.058504 2094815 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:57:17.062819 2094815 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:57:17.062847 2094815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 11:57:17.062938 2094815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 11:57:17.062978 2094815 start.go:296] duration metric: took 90.295043ms for postStartSetup
	I0819 11:57:17.063027 2094815 main.go:141] libmachine: (addons-442145) Calling .GetConfigRaw
	I0819 11:57:17.063686 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:17.066377 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.066777 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.066802 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.066982 2094815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/config.json ...
	I0819 11:57:17.067199 2094815 start.go:128] duration metric: took 24.514284037s to createHost
	I0819 11:57:17.067228 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:17.069466 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.069776 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.069801 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.069954 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:17.070143 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.070323 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.070446 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:17.070633 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:17.070795 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:17.070806 2094815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:57:17.172667 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724068637.149629524
	
	I0819 11:57:17.172708 2094815 fix.go:216] guest clock: 1724068637.149629524
	I0819 11:57:17.172722 2094815 fix.go:229] Guest: 2024-08-19 11:57:17.149629524 +0000 UTC Remote: 2024-08-19 11:57:17.067213451 +0000 UTC m=+24.618595762 (delta=82.416073ms)
	I0819 11:57:17.172797 2094815 fix.go:200] guest clock delta is within tolerance: 82.416073ms
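(Annotation: the guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock and accepts the host if the offset stays within a tolerance. A rough sketch of that comparison; the 2s threshold is an assumed example, not a value confirmed by this log:)

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestDelta parses the guest's `date +%s.%N` output and returns its offset
	// from the local clock. float64 loses sub-microsecond precision, which is
	// fine for a clock-skew check.
	func guestDelta(dateOutput string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(dateOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		// Value taken from the log line above.
		delta, err := guestDelta("1724068637.149629524")
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // illustrative threshold only
		fmt.Printf("guest clock delta: %v (within %v: %t)\n",
			delta, tolerance, delta < tolerance && delta > -tolerance)
	}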
	I0819 11:57:17.172806 2094815 start.go:83] releasing machines lock for "addons-442145", held for 24.619973795s
	I0819 11:57:17.172843 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.173134 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:17.175770 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.176153 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.176204 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.176336 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.176864 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.177050 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.177132 2094815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:57:17.177217 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:17.177245 2094815 ssh_runner.go:195] Run: cat /version.json
	I0819 11:57:17.177266 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:17.179785 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.179931 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.180140 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.180179 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.180293 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.180315 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:17.180317 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.180491 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:17.180505 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.180673 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.180698 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:17.180826 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:17.180827 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:17.180954 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:17.284189 2094815 ssh_runner.go:195] Run: systemctl --version
	I0819 11:57:17.290157 2094815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:57:17.455135 2094815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:57:17.461485 2094815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:57:17.461548 2094815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:57:17.480132 2094815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:57:17.480154 2094815 start.go:495] detecting cgroup driver to use...
	I0819 11:57:17.480214 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:57:17.500432 2094815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:57:17.515964 2094815 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:57:17.516013 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:57:17.529514 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:57:17.544663 2094815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:57:17.680445 2094815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:57:17.837786 2094815 docker.go:233] disabling docker service ...
	I0819 11:57:17.837863 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:57:17.851884 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:57:17.864472 2094815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:57:17.994507 2094815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:57:18.125073 2094815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:57:18.138335 2094815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:57:18.156461 2094815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:57:18.156531 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.167173 2094815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:57:18.167227 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.177819 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.187852 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.197614 2094815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:57:18.207875 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.217992 2094815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.234018 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
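(Annotation: the run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to cgroupfs, reset conmon_cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. The same style of idempotent line rewrite can be sketched in Go with a regexp; this shows only the pause_image edit and is an illustration, not the minikube implementation:)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Println("write:", err)
		}
	}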
	I0819 11:57:18.244002 2094815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:57:18.253243 2094815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:57:18.253290 2094815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:57:18.266364 2094815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
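(Annotation: the sysctl failure logged above is expected on a fresh guest, since /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded; hence the follow-up `modprobe br_netfilter` and the echo enabling IPv4 forwarding. A small check-then-fix sketch of that sequence, assuming root; illustrative only:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(sysctlPath); err != nil {
			// The key is missing until the br_netfilter module is loaded.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v: %s\n", err, out)
				return
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}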
	I0819 11:57:18.275671 2094815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:18.394221 2094815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:57:18.531268 2094815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:57:18.531371 2094815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:57:18.536187 2094815 start.go:563] Will wait 60s for crictl version
	I0819 11:57:18.536260 2094815 ssh_runner.go:195] Run: which crictl
	I0819 11:57:18.539847 2094815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:57:18.577401 2094815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:57:18.577543 2094815 ssh_runner.go:195] Run: crio --version
	I0819 11:57:18.603938 2094815 ssh_runner.go:195] Run: crio --version
	I0819 11:57:18.631705 2094815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:57:18.632884 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:18.635593 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:18.635969 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:18.635994 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:18.636362 2094815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:57:18.640151 2094815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
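(Annotation: the bash one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal entry, append the current one, and copy the temp file back with sudo. Roughly the same idea in Go, path and hostname as in the log; a sketch, not minikube's code:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale host.minikube.internal mapping (matches the grep -v above).
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			fmt.Println("write:", err)
		}
	}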
	I0819 11:57:18.652107 2094815 kubeadm.go:883] updating cluster {Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:57:18.652246 2094815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:18.652307 2094815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:18.683452 2094815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 11:57:18.683529 2094815 ssh_runner.go:195] Run: which lz4
	I0819 11:57:18.687583 2094815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:57:18.691636 2094815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:57:18.691665 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 11:57:19.943994 2094815 crio.go:462] duration metric: took 1.256445821s to copy over tarball
	I0819 11:57:19.944102 2094815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:57:22.023125 2094815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.078969951s)
	I0819 11:57:22.023161 2094815 crio.go:469] duration metric: took 2.079130592s to extract the tarball
	I0819 11:57:22.023169 2094815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 11:57:22.061657 2094815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:22.104809 2094815 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:57:22.104837 2094815 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:57:22.104846 2094815 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.0 crio true true} ...
	I0819 11:57:22.104968 2094815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-442145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:57:22.105039 2094815 ssh_runner.go:195] Run: crio config
	I0819 11:57:22.152875 2094815 cni.go:84] Creating CNI manager for ""
	I0819 11:57:22.152899 2094815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:57:22.152910 2094815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:57:22.152936 2094815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-442145 NodeName:addons-442145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:57:22.153097 2094815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-442145"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:57:22.153177 2094815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:57:22.163740 2094815 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:57:22.163839 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:57:22.173014 2094815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 11:57:22.189055 2094815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:57:22.204442 2094815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0819 11:57:22.220010 2094815 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0819 11:57:22.223740 2094815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:57:22.235569 2094815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:22.357514 2094815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:57:22.375348 2094815 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145 for IP: 192.168.39.65
	I0819 11:57:22.375379 2094815 certs.go:194] generating shared ca certs ...
	I0819 11:57:22.375403 2094815 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:22.375605 2094815 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 11:57:22.971448 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt ...
	I0819 11:57:22.971487 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt: {Name:mka0297c7b1857162b4d15fa199e14a0b441de45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:22.971811 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key ...
	I0819 11:57:22.971839 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key: {Name:mka4e1d0997125aeae0f1f57e795b0c6e3edc1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:22.971968 2094815 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 11:57:23.193860 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt ...
	I0819 11:57:23.193894 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt: {Name:mk2f58fe458c077be1f44279e6df9b0b93ca2f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.194102 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key ...
	I0819 11:57:23.194123 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key: {Name:mkb9e5a393787f71234dac42cae557aec592d278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.194233 2094815 certs.go:256] generating profile certs ...
	I0819 11:57:23.194324 2094815 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.key
	I0819 11:57:23.194359 2094815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt with IP's: []
	I0819 11:57:23.476492 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt ...
	I0819 11:57:23.476530 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: {Name:mkb3e0f02ffff71610e3eeb94d54caada6d612bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.476742 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.key ...
	I0819 11:57:23.476759 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.key: {Name:mk98be8b936ae078011d638e96490d70fa4cd538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.476864 2094815 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f
	I0819 11:57:23.476886 2094815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65]
	I0819 11:57:23.657327 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f ...
	I0819 11:57:23.657371 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f: {Name:mk8e9b79a8c57b80cbe867fdf3f5be9e1482b1aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.657604 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f ...
	I0819 11:57:23.657627 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f: {Name:mk426718edcae2a1adb428729b94abd73c838576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.657748 2094815 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt
	I0819 11:57:23.657866 2094815 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key
	I0819 11:57:23.657950 2094815 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key
	I0819 11:57:23.657981 2094815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt with IP's: []
	I0819 11:57:24.005260 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt ...
	I0819 11:57:24.005294 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt: {Name:mkdd6f0319931e35fc36f2fbe0654c73e6a3aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:24.005578 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key ...
	I0819 11:57:24.005615 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key: {Name:mkece11db20ab97c6baea62eb66ccae006f7d980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:24.005908 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:57:24.005957 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 11:57:24.005995 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:57:24.006029 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 11:57:24.006723 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:57:24.049247 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:57:24.094161 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:57:24.119278 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:57:24.142385 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 11:57:24.165914 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:57:24.189542 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:57:24.213053 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 11:57:24.235959 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:57:24.258911 2094815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:57:24.276073 2094815 ssh_runner.go:195] Run: openssl version
	I0819 11:57:24.281972 2094815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:57:24.292844 2094815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:24.297232 2094815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:24.297307 2094815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:24.302971 2094815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:57:24.313731 2094815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:57:24.317680 2094815 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:57:24.317732 2094815 kubeadm.go:392] StartCluster: {Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:24.317824 2094815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:57:24.317908 2094815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:57:24.365294 2094815 cri.go:89] found id: ""
	I0819 11:57:24.365370 2094815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:57:24.375209 2094815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:57:24.384667 2094815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:57:24.394313 2094815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:57:24.394334 2094815 kubeadm.go:157] found existing configuration files:
	
	I0819 11:57:24.394389 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:57:24.403638 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:57:24.403712 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:57:24.413346 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:57:24.422769 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:57:24.422847 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:57:24.432137 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:57:24.441012 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:57:24.441079 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:57:24.450410 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:57:24.459229 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:57:24.459305 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:57:24.468345 2094815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:57:24.515669 2094815 kubeadm.go:310] W0819 11:57:24.500205     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:24.516629 2094815 kubeadm.go:310] W0819 11:57:24.501439     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:24.615797 2094815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:57:34.378940 2094815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:57:34.379045 2094815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:57:34.379185 2094815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:57:34.379297 2094815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:57:34.379432 2094815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:57:34.379493 2094815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:57:34.381012 2094815 out.go:235]   - Generating certificates and keys ...
	I0819 11:57:34.381103 2094815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:57:34.381206 2094815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:57:34.381297 2094815 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:57:34.381345 2094815 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:57:34.381404 2094815 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:57:34.381450 2094815 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:57:34.381514 2094815 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:57:34.381619 2094815 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-442145 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0819 11:57:34.381696 2094815 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:57:34.381842 2094815 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-442145 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0819 11:57:34.381904 2094815 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:57:34.381962 2094815 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:57:34.382002 2094815 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:57:34.382056 2094815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:57:34.382106 2094815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:57:34.382179 2094815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:57:34.382249 2094815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:57:34.382321 2094815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:57:34.382383 2094815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:57:34.382465 2094815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:57:34.382537 2094815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:57:34.384537 2094815 out.go:235]   - Booting up control plane ...
	I0819 11:57:34.384626 2094815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:57:34.384742 2094815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:57:34.384840 2094815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:57:34.384961 2094815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:57:34.385049 2094815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:57:34.385110 2094815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:57:34.385280 2094815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:57:34.385375 2094815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:57:34.385431 2094815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.979821ms
	I0819 11:57:34.385499 2094815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:57:34.385554 2094815 kubeadm.go:310] [api-check] The API server is healthy after 5.002073141s
	I0819 11:57:34.385646 2094815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:57:34.385752 2094815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:57:34.385802 2094815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:57:34.385972 2094815 kubeadm.go:310] [mark-control-plane] Marking the node addons-442145 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:57:34.386046 2094815 kubeadm.go:310] [bootstrap-token] Using token: 9y2rkr.4iuj11aems6pns8i
	I0819 11:57:34.387475 2094815 out.go:235]   - Configuring RBAC rules ...
	I0819 11:57:34.387608 2094815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:57:34.387722 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:57:34.387891 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:57:34.388025 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:57:34.388182 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:57:34.388321 2094815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:57:34.388462 2094815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:57:34.388499 2094815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:57:34.388540 2094815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:57:34.388546 2094815 kubeadm.go:310] 
	I0819 11:57:34.388591 2094815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:57:34.388598 2094815 kubeadm.go:310] 
	I0819 11:57:34.388677 2094815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:57:34.388685 2094815 kubeadm.go:310] 
	I0819 11:57:34.388706 2094815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:57:34.388796 2094815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:57:34.388868 2094815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:57:34.388891 2094815 kubeadm.go:310] 
	I0819 11:57:34.388973 2094815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:57:34.388985 2094815 kubeadm.go:310] 
	I0819 11:57:34.389046 2094815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:57:34.389057 2094815 kubeadm.go:310] 
	I0819 11:57:34.389135 2094815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:57:34.389281 2094815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:57:34.389383 2094815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:57:34.389389 2094815 kubeadm.go:310] 
	I0819 11:57:34.389509 2094815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:57:34.389608 2094815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:57:34.389621 2094815 kubeadm.go:310] 
	I0819 11:57:34.389739 2094815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9y2rkr.4iuj11aems6pns8i \
	I0819 11:57:34.389890 2094815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 11:57:34.389914 2094815 kubeadm.go:310] 	--control-plane 
	I0819 11:57:34.389920 2094815 kubeadm.go:310] 
	I0819 11:57:34.389995 2094815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:57:34.390001 2094815 kubeadm.go:310] 
	I0819 11:57:34.390067 2094815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9y2rkr.4iuj11aems6pns8i \
	I0819 11:57:34.390183 2094815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 11:57:34.390196 2094815 cni.go:84] Creating CNI manager for ""
	I0819 11:57:34.390213 2094815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:57:34.391725 2094815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:57:34.392874 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:57:34.404390 2094815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
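The two log lines above show minikube creating /etc/cni/net.d and writing a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist; the file's actual contents are not captured in this log. If they are needed when diagnosing a CNI-related failure, they can be read straight off the node with the same binary used throughout this report (the sudo is only a precaution in case the file is not world-readable):

    out/minikube-linux-amd64 -p addons-442145 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"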
	I0819 11:57:34.423587 2094815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:57:34.423699 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:34.423728 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-442145 minikube.k8s.io/updated_at=2024_08_19T11_57_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=addons-442145 minikube.k8s.io/primary=true
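The two kubectl invocations above grant cluster-admin to the kube-system:default service account via the minikube-rbac clusterrolebinding and stamp the node with minikube version/commit labels. One way to confirm both took effect, using the report's minikube binary (a verification aid only, not something the test itself runs):

    out/minikube-linux-amd64 -p addons-442145 kubectl -- get clusterrolebinding minikube-rbac
    out/minikube-linux-amd64 -p addons-442145 kubectl -- get node addons-442145 --show-labels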
	I0819 11:57:34.449659 2094815 ops.go:34] apiserver oom_adj: -16
	I0819 11:57:34.529363 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:35.030024 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:35.529541 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:36.030427 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:36.529478 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:37.029725 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:37.529853 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:38.030014 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:38.106439 2094815 kubeadm.go:1113] duration metric: took 3.682830959s to wait for elevateKubeSystemPrivileges
	I0819 11:57:38.106479 2094815 kubeadm.go:394] duration metric: took 13.788753152s to StartCluster
	I0819 11:57:38.106505 2094815 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:38.106653 2094815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 11:57:38.107163 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:38.107390 2094815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:57:38.107416 2094815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:57:38.107502 2094815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
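The toEnable map above is the resolved addon set for this profile; the enable/disable decisions come from that map, not from anything run later. After startup it can be cross-checked against what was actually enabled, for example with (shown only as a verification aid, not part of the test flow):

    out/minikube-linux-amd64 -p addons-442145 addons list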
	I0819 11:57:38.107604 2094815 config.go:182] Loaded profile config "addons-442145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:38.107613 2094815 addons.go:69] Setting yakd=true in profile "addons-442145"
	I0819 11:57:38.107645 2094815 addons.go:234] Setting addon yakd=true in "addons-442145"
	I0819 11:57:38.107649 2094815 addons.go:69] Setting inspektor-gadget=true in profile "addons-442145"
	I0819 11:57:38.107668 2094815 addons.go:69] Setting registry=true in profile "addons-442145"
	I0819 11:57:38.107691 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107701 2094815 addons.go:69] Setting gcp-auth=true in profile "addons-442145"
	I0819 11:57:38.107725 2094815 addons.go:234] Setting addon inspektor-gadget=true in "addons-442145"
	I0819 11:57:38.107724 2094815 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-442145"
	I0819 11:57:38.107745 2094815 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-442145"
	I0819 11:57:38.107762 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107764 2094815 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-442145"
	I0819 11:57:38.107743 2094815 mustload.go:65] Loading cluster: addons-442145
	I0819 11:57:38.107803 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107805 2094815 addons.go:69] Setting metrics-server=true in profile "addons-442145"
	I0819 11:57:38.107829 2094815 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-442145"
	I0819 11:57:38.107841 2094815 addons.go:234] Setting addon metrics-server=true in "addons-442145"
	I0819 11:57:38.107864 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107971 2094815 addons.go:69] Setting cloud-spanner=true in profile "addons-442145"
	I0819 11:57:38.107992 2094815 config.go:182] Loaded profile config "addons-442145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:38.108002 2094815 addons.go:234] Setting addon cloud-spanner=true in "addons-442145"
	I0819 11:57:38.108023 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108202 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108210 2094815 addons.go:234] Setting addon registry=true in "addons-442145"
	I0819 11:57:38.108225 2094815 addons.go:69] Setting volcano=true in profile "addons-442145"
	I0819 11:57:38.108233 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108234 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108246 2094815 addons.go:234] Setting addon volcano=true in "addons-442145"
	I0819 11:57:38.108248 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107866 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108267 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108273 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108342 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108370 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108389 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108407 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108577 2094815 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-442145"
	I0819 11:57:38.108621 2094815 addons.go:69] Setting volumesnapshots=true in profile "addons-442145"
	I0819 11:57:38.108651 2094815 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-442145"
	I0819 11:57:38.108655 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108674 2094815 addons.go:234] Setting addon volumesnapshots=true in "addons-442145"
	I0819 11:57:38.108692 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108708 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108593 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108766 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108608 2094815 addons.go:69] Setting default-storageclass=true in profile "addons-442145"
	I0819 11:57:38.108980 2094815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-442145"
	I0819 11:57:38.108630 2094815 addons.go:69] Setting ingress=true in profile "addons-442145"
	I0819 11:57:38.109037 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109045 2094815 addons.go:234] Setting addon ingress=true in "addons-442145"
	I0819 11:57:38.109057 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109088 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.109143 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108203 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109167 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109196 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109376 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109409 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109475 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109495 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108614 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109696 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108630 2094815 addons.go:69] Setting ingress-dns=true in profile "addons-442145"
	I0819 11:57:38.110281 2094815 addons.go:234] Setting addon ingress-dns=true in "addons-442145"
	I0819 11:57:38.110337 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107654 2094815 addons.go:69] Setting storage-provisioner=true in profile "addons-442145"
	I0819 11:57:38.110458 2094815 addons.go:234] Setting addon storage-provisioner=true in "addons-442145"
	I0819 11:57:38.110495 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.110709 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.110742 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.110860 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.110902 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108640 2094815 addons.go:69] Setting helm-tiller=true in profile "addons-442145"
	I0819 11:57:38.111053 2094815 addons.go:234] Setting addon helm-tiller=true in "addons-442145"
	I0819 11:57:38.111089 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108614 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.116267 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.116422 2094815 out.go:177] * Verifying Kubernetes components...
	I0819 11:57:38.117928 2094815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:38.129190 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0819 11:57:38.129398 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0819 11:57:38.129497 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0819 11:57:38.129918 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.130036 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.130091 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.130442 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.130462 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.130640 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.130660 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.130782 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.130796 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.130908 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.131000 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.131150 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.131180 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.131580 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.131621 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.131693 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.131714 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.133055 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.133425 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.133476 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.138993 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0819 11:57:38.144130 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0819 11:57:38.145515 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.145519 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.145605 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42003
	I0819 11:57:38.146381 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.146401 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.146774 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.146800 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.146838 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.147040 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.147400 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.147461 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.147599 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.147651 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I0819 11:57:38.147807 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.148275 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.148192 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.148686 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.148890 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.148930 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.148963 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.149424 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0819 11:57:38.149878 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.150113 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.150180 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.150331 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.150477 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.150362 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I0819 11:57:38.150824 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.151253 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.151266 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.151317 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.151958 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.152003 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.156485 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.160645 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.160723 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 11:57:38.160731 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.161318 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.161337 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.161562 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.161745 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.162375 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.162424 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.162720 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.162739 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.163107 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.163260 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.164879 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.167307 2094815 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-442145"
	I0819 11:57:38.167358 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.167741 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.167789 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.167892 2094815 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 11:57:38.169077 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 11:57:38.169102 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 11:57:38.169127 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.172505 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.172977 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.173011 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.173311 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.173527 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.173685 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.173818 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.194814 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0819 11:57:38.195440 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.196008 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.196028 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.196397 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.196604 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.198597 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0819 11:57:38.199936 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.200959 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0819 11:57:38.201524 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.201970 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 11:57:38.202109 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.202126 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.202197 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0819 11:57:38.202705 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.202784 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.203031 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.203761 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.203779 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.205488 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 11:57:38.206513 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I0819 11:57:38.206555 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0819 11:57:38.206573 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.206520 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0819 11:57:38.206669 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I0819 11:57:38.206739 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.207163 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.207281 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.207660 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.207706 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.207790 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.207807 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.207977 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 11:57:38.208283 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.208282 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.208342 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.208638 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.208662 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.208736 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.209194 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.209231 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.209417 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.209612 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39159
	I0819 11:57:38.209925 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.209954 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.210026 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.210089 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.210195 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.210209 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.210377 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 11:57:38.210626 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0819 11:57:38.210756 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.210799 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.211088 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.211165 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.211326 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.212744 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 11:57:38.213133 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.213148 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.213224 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.213281 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I0819 11:57:38.213435 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.213496 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0819 11:57:38.213706 2094815 addons.go:234] Setting addon default-storageclass=true in "addons-442145"
	I0819 11:57:38.213747 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.213995 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.214031 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.214140 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.214174 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.214306 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.214348 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0819 11:57:38.214388 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0819 11:57:38.214786 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.214810 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.214862 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 11:57:38.214915 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.214884 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.214918 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.215028 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.215159 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.215669 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.215693 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.215830 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.215861 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.216507 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.216525 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.216571 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.216875 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.216968 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 11:57:38.217277 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.217312 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.217316 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.217659 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.218655 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.218680 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.218748 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.219332 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 11:57:38.219392 2094815 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 11:57:38.220047 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.220226 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.220353 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 11:57:38.220373 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 11:57:38.220395 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.220522 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.220593 2094815 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 11:57:38.220613 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 11:57:38.220649 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.220892 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.222292 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.222310 2094815 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 11:57:38.222331 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.222844 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.223289 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.224531 2094815 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 11:57:38.224942 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.224986 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.225544 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.225583 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.225700 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 11:57:38.225721 2094815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 11:57:38.225747 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.225784 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.225786 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.225864 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.225966 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.225996 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.226137 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.226323 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.226479 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.226927 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.227652 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.228340 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 11:57:38.228348 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.228360 2094815 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 11:57:38.228389 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.229264 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.229408 2094815 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 11:57:38.229778 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.230228 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.230255 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.230408 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.230666 2094815 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:57:38.230685 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 11:57:38.230703 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.231274 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.231513 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.231641 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.232224 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.232654 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0819 11:57:38.232839 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.232906 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.232926 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.232958 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.233168 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.233413 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.233820 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.234504 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 11:57:38.235391 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.235595 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 11:57:38.235617 2094815 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 11:57:38.235636 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.235987 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.236016 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.236201 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.236433 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.236559 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.236693 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.237121 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.237779 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.237801 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.238187 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.238705 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.238748 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.238950 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.239137 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.239171 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.239436 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.239641 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.239856 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.240013 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.241631 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I0819 11:57:38.242009 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.242508 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.242529 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.242854 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.243001 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.244655 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.246488 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 11:57:38.247715 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:57:38.248743 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:57:38.249454 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39957
	I0819 11:57:38.249858 2094815 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:57:38.249882 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 11:57:38.249905 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.250401 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.250946 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.250971 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.251382 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.251957 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.252006 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.253276 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.253737 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.253766 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.253952 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.254114 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.254243 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.254382 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.256985 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0819 11:57:38.257536 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.258159 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.258187 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.259518 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.259742 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.260928 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0819 11:57:38.261547 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.262100 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.262119 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.262544 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.262769 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.263302 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0819 11:57:38.263460 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0819 11:57:38.264037 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.264121 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.264908 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.264927 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.265036 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.265340 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.265498 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
	I0819 11:57:38.265652 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:38.265662 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:38.265865 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:38.265899 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:38.265907 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:38.265916 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:38.265924 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:38.265948 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.266737 2094815 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 11:57:38.267574 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:38.267576 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:38.267586 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:38.267550 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0819 11:57:38.267643 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.267657 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	W0819 11:57:38.267698 2094815 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 11:57:38.268096 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.268099 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.268159 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.268291 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.268415 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.268727 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.268745 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.268807 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 11:57:38.269157 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.269169 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.269353 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.269558 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.269774 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.270016 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.270079 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.270099 2094815 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 11:57:38.270126 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 11:57:38.270149 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.270693 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.271527 2094815 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 11:57:38.272118 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 11:57:38.272730 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.272957 2094815 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 11:57:38.272976 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 11:57:38.272994 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.273810 2094815 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:57:38.273828 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 11:57:38.273845 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.274109 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.274470 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:57:38.275176 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.275516 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.275551 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.275648 2094815 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 11:57:38.275796 2094815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:57:38.275811 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:57:38.275828 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.275908 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.276132 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.276370 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.276567 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.278057 2094815 out.go:177]   - Using image docker.io/busybox:stable
	I0819 11:57:38.278060 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.278599 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.278636 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.278812 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.279011 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.279034 2094815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:57:38.279047 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 11:57:38.279064 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.279207 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.279274 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.279353 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.279654 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.279979 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.280013 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.280512 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.280545 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.280556 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.280691 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.280861 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.280866 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.281156 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.281204 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.281376 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.281527 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.282567 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.282882 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.282912 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.283051 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.283202 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.283374 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.283499 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.289416 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I0819 11:57:38.289771 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.290427 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.290445 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.290842 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.291047 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.292615 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.292906 2094815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:57:38.292921 2094815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:57:38.292939 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.296091 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.296570 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.296600 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.296813 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.296990 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.297132 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.297261 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	W0819 11:57:38.298172 2094815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51710->192.168.39.65:22: read: connection reset by peer
	I0819 11:57:38.298205 2094815 retry.go:31] will retry after 335.547232ms: ssh: handshake failed: read tcp 192.168.39.1:51710->192.168.39.65:22: read: connection reset by peer
	I0819 11:57:38.564068 2094815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:57:38.564283 2094815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 11:57:38.579191 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 11:57:38.579231 2094815 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 11:57:38.607420 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 11:57:38.674593 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 11:57:38.674622 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 11:57:38.694992 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:57:38.708710 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 11:57:38.708746 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 11:57:38.732723 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:57:38.733816 2094815 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 11:57:38.733835 2094815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 11:57:38.738073 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:57:38.740231 2094815 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 11:57:38.740249 2094815 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 11:57:38.749544 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 11:57:38.749564 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 11:57:38.884025 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 11:57:38.884053 2094815 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 11:57:38.893025 2094815 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 11:57:38.893048 2094815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 11:57:38.896853 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:57:38.899665 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 11:57:38.899689 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 11:57:38.910905 2094815 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 11:57:38.910925 2094815 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 11:57:38.932460 2094815 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:57:38.932481 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 11:57:38.933207 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:57:38.940727 2094815 node_ready.go:35] waiting up to 6m0s for node "addons-442145" to be "Ready" ...
	I0819 11:57:38.946122 2094815 node_ready.go:49] node "addons-442145" has status "Ready":"True"
	I0819 11:57:38.946156 2094815 node_ready.go:38] duration metric: took 5.390948ms for node "addons-442145" to be "Ready" ...
	I0819 11:57:38.946170 2094815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:57:38.957712 2094815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace to be "Ready" ...
	I0819 11:57:39.019988 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 11:57:39.020030 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 11:57:39.065454 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 11:57:39.065492 2094815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 11:57:39.127813 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 11:57:39.127849 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 11:57:39.145074 2094815 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 11:57:39.145104 2094815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 11:57:39.159222 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 11:57:39.159251 2094815 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 11:57:39.166327 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:57:39.196932 2094815 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:57:39.196964 2094815 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 11:57:39.215941 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 11:57:39.215965 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 11:57:39.249770 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:57:39.249809 2094815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 11:57:39.273120 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 11:57:39.273146 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 11:57:39.316286 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:57:39.337741 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:57:39.367919 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 11:57:39.367957 2094815 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 11:57:39.424287 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:57:39.424320 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 11:57:39.431274 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:57:39.448963 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 11:57:39.448997 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 11:57:39.463034 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 11:57:39.463063 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 11:57:39.633323 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 11:57:39.633351 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 11:57:39.673127 2094815 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:57:39.673163 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 11:57:39.720672 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 11:57:39.720701 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 11:57:39.731211 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:57:39.810511 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:57:39.810537 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 11:57:39.940902 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:57:39.973636 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 11:57:39.973661 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 11:57:40.077191 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:57:40.271200 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 11:57:40.271232 2094815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 11:57:40.557600 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 11:57:40.557635 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 11:57:40.702057 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 11:57:40.702090 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 11:57:40.966368 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:40.984458 2094815 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.420132515s)
	I0819 11:57:40.984497 2094815 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 11:57:41.080803 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:57:41.080830 2094815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 11:57:41.326414 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.71894541s)
	I0819 11:57:41.326467 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:41.326482 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:41.326798 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:41.326848 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:41.326867 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:41.326885 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:41.326896 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:41.327243 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:41.327304 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:41.327326 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:41.393897 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:57:41.536655 2094815 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-442145" context rescaled to 1 replicas
	I0819 11:57:42.991192 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:45.221824 2094815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 11:57:45.221882 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:45.225476 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.225970 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:45.226006 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.226132 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:45.226411 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:45.226636 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:45.226809 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:45.465460 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:45.780289 2094815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 11:57:45.867391 2094815 addons.go:234] Setting addon gcp-auth=true in "addons-442145"
	I0819 11:57:45.867479 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:45.867987 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:45.868049 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:45.884933 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0819 11:57:45.885378 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:45.885962 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:45.885986 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:45.886299 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:45.886818 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:45.886865 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:45.902843 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0819 11:57:45.903305 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:45.903895 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:45.903928 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:45.904330 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:45.904651 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:45.906291 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:45.906575 2094815 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 11:57:45.906606 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:45.909590 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.910082 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:45.910112 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.910292 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:45.910477 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:45.910694 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:45.910830 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:47.598656 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:47.623655 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.928621766s)
	I0819 11:57:47.623701 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.890939743s)
	I0819 11:57:47.623719 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623734 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623752 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623774 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623781 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.885681935s)
	I0819 11:57:47.623821 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623823 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.726942882s)
	I0819 11:57:47.623835 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623844 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623853 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623896 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.690664231s)
	I0819 11:57:47.623923 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623931 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.457576354s)
	I0819 11:57:47.623951 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623964 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623962 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.307649831s)
	I0819 11:57:47.623981 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623991 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623935 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624065 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.286294779s)
	I0819 11:57:47.624087 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624097 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624271 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624292 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624302 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624310 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624437 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624446 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624453 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624460 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624468 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624471 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624477 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624480 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624596 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.624639 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624675 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624697 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624713 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624720 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.193414484s)
	I0819 11:57:47.624752 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624762 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624857 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.89361848s)
	I0819 11:57:47.624875 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624883 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.625003 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.684071119s)
	W0819 11:57:47.625029 2094815 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:57:47.625057 2094815 retry.go:31] will retry after 194.789439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:57:47.625076 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.625125 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.625133 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.625140 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.625141 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.5479198s)
	I0819 11:57:47.625148 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.625156 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.625166 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.625221 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.625228 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.625236 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.625243 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.626363 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.626395 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.626402 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627076 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627100 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627097 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627118 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627126 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627135 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.627142 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.627159 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627185 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627118 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627199 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627201 2094815 addons.go:475] Verifying addon registry=true in "addons-442145"
	I0819 11:57:47.627221 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627228 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627329 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627352 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627359 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627367 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.627374 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.627442 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627464 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627471 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627689 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627700 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627725 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627735 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627741 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627749 2094815 addons.go:475] Verifying addon metrics-server=true in "addons-442145"
	I0819 11:57:47.627125 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627934 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.627952 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.628055 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.628079 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.628102 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.628109 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.628766 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.628785 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.628794 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.628802 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.629272 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.629307 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.629314 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.630239 2094815 out.go:177] * Verifying registry addon...
	I0819 11:57:47.630951 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.630969 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.631286 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.631337 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.631348 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.631357 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.631365 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.631372 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.631381 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.631388 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.632684 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.632718 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.632725 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.633174 2094815 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-442145 service yakd-dashboard -n yakd-dashboard
	
	I0819 11:57:47.627312 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.633296 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.633308 2094815 addons.go:475] Verifying addon ingress=true in "addons-442145"
	I0819 11:57:47.633536 2094815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 11:57:47.634881 2094815 out.go:177] * Verifying ingress addon...
	I0819 11:57:47.636648 2094815 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 11:57:47.649138 2094815 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:57:47.649165 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:47.653339 2094815 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 11:57:47.653360 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:47.658324 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.658350 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.658772 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.658791 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 11:57:47.658885 2094815 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 11:57:47.665021 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.665044 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.665360 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.665378 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.820325 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:57:48.166204 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:48.166652 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:48.344018 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.950048961s)
	I0819 11:57:48.344081 2094815 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.437459649s)
	I0819 11:57:48.344084 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:48.344208 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:48.344567 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:48.344574 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:48.344585 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:48.344595 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:48.344604 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:48.344838 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:48.344856 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:48.344860 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:48.344884 2094815 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-442145"
	I0819 11:57:48.345577 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 11:57:48.346461 2094815 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 11:57:48.347603 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:57:48.348382 2094815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 11:57:48.348598 2094815 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 11:57:48.348620 2094815 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 11:57:48.378480 2094815 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:57:48.378517 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:48.580069 2094815 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 11:57:48.580102 2094815 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 11:57:48.637966 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:48.642188 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:48.655162 2094815 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:57:48.655188 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 11:57:48.774774 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:57:48.853534 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:49.140047 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:49.240327 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:49.353142 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:49.637006 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:49.639846 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:49.853086 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:49.964136 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:50.145614 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:50.146017 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:50.329243 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.508853221s)
	I0819 11:57:50.329321 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.329340 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.329756 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.329786 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.329797 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.329806 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.329766 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.330093 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.330113 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.330134 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.364522 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:50.584208 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.809381965s)
	I0819 11:57:50.584279 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.584297 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.584674 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.584698 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.584708 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.584718 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.584723 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.584980 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.585015 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.585048 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.586908 2094815 addons.go:475] Verifying addon gcp-auth=true in "addons-442145"
	I0819 11:57:50.588489 2094815 out.go:177] * Verifying gcp-auth addon...
	I0819 11:57:50.590190 2094815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 11:57:50.609746 2094815 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 11:57:50.609769 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:50.640527 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:50.643751 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:50.858111 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:51.093986 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:51.138443 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:51.143223 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:51.352789 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:51.594043 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:51.637071 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:51.640320 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:51.853455 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:52.093947 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:52.137867 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:52.140323 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:52.353717 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:52.464245 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:52.594299 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:52.695908 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:52.696258 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:52.853458 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:53.095533 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:53.136865 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:53.140325 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:53.353233 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:53.593604 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:53.638307 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:53.642048 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:53.853475 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:54.093993 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:54.136907 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:54.140068 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:54.491105 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:54.495246 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:54.594121 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:54.636797 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:54.639578 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:54.857063 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:55.094735 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:55.141055 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:55.143270 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:55.352924 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:55.594118 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:55.637064 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:55.641668 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:55.854374 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:55.963336 2094815 pod_ready.go:98] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.65 HostIPs:[{IP:192.168.39.65}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 11:57:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 11:57:43 +0000 UTC,FinishedAt:2024-08-19 11:57:53 +0000 UTC,ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927 Started:0xc0018b53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020ef1c0} {Name:kube-api-access-jq8lx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020ef1d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 11:57:55.963371 2094815 pod_ready.go:82] duration metric: took 17.005625305s for pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace to be "Ready" ...
	E0819 11:57:55.963386 2094815 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.65 HostIPs:[{IP:192.168.39.65}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 11:57:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 11:57:43 +0000 UTC,FinishedAt:2024-08-19 11:57:53 +0000 UTC,ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927 Started:0xc0018b53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020ef1c0} {Name:kube-api-access-jq8lx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020ef1d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 11:57:55.963395 2094815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace to be "Ready" ...
	I0819 11:57:56.094411 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:56.138077 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:56.140319 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:56.353796 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:56.597824 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:56.637894 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:56.640254 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:56.852869 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:57.094099 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:57.136601 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:57.140152 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:57.353148 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:57.594173 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:57.637534 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:57.640816 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:57.853849 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:57.971045 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:58.094801 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:58.139940 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:58.141737 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:58.354906 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:58.775818 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:58.775940 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:58.776498 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:58.853083 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:59.093939 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:59.137209 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:59.140022 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:59.352743 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:59.593714 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:59.637495 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:59.639551 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:59.853767 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:59.971309 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:00.094542 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:00.137874 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:00.140603 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:00.353302 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:00.594446 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:00.637970 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:00.640276 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:00.853003 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:01.094294 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:01.137373 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:01.140748 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:01.353486 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:01.594402 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:01.637831 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:01.640110 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:01.855495 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:02.093852 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:02.137989 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:02.140352 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:02.352868 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:02.470936 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:02.594358 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:03.025727 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:03.026100 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:03.027289 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:03.094118 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:03.138571 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:03.141373 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:03.355523 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:03.594421 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:03.637439 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:03.640091 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:03.861158 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:04.094272 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:04.137399 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:04.139673 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:04.353341 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:04.594353 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:04.637299 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:04.641121 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:04.853728 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:04.971380 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:05.095448 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:05.139595 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:05.141445 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:05.353113 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:05.593290 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:05.637181 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:05.640330 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:05.853264 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:06.095553 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:06.137146 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:06.140379 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:06.352998 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:06.878222 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:06.878271 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:06.879690 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:06.881838 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:07.094818 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:07.138110 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:07.140770 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:07.353454 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:07.470785 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:07.594318 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:07.637406 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:07.639617 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:07.853013 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:08.094014 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:08.137894 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:08.144517 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:08.353977 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:08.593933 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:08.638220 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:08.640851 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:08.854277 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:09.099197 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:09.136953 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:09.140127 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:09.354238 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:09.594242 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:09.636998 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:09.640153 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:09.853396 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:09.970028 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:10.094591 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:10.137566 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.139803 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:10.353216 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:10.593776 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:10.638093 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.641061 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:10.854121 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:11.541221 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:11.541368 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.542525 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.543386 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:11.593710 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:11.637591 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.639939 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.852964 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:11.971656 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:12.093938 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:12.137117 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.140775 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.352699 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.593726 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:12.637535 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.639864 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.853713 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.094088 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:13.137378 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.139555 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.352916 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.594273 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:13.637023 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.640823 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.853692 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.974025 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:14.094605 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:14.138158 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.140899 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.353923 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.594546 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:14.637808 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.640733 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.853690 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.096280 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:15.137918 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.140696 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.353035 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.594558 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:15.637694 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.640463 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.853959 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.969911 2094815 pod_ready.go:93] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.969942 2094815 pod_ready.go:82] duration metric: took 20.006535331s for pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.969954 2094815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.978120 2094815 pod_ready.go:93] pod "etcd-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.978147 2094815 pod_ready.go:82] duration metric: took 8.186119ms for pod "etcd-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.978160 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.982466 2094815 pod_ready.go:93] pod "kube-apiserver-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.982496 2094815 pod_ready.go:82] duration metric: took 4.327594ms for pod "kube-apiserver-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.982510 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.986599 2094815 pod_ready.go:93] pod "kube-controller-manager-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.986618 2094815 pod_ready.go:82] duration metric: took 4.101224ms for pod "kube-controller-manager-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.986627 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rpzwc" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.991490 2094815 pod_ready.go:93] pod "kube-proxy-rpzwc" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.991512 2094815 pod_ready.go:82] duration metric: took 4.879831ms for pod "kube-proxy-rpzwc" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.991520 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:16.093912 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.136646 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.140089 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.354308 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.368601 2094815 pod_ready.go:93] pod "kube-scheduler-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:16.368629 2094815 pod_ready.go:82] duration metric: took 377.100554ms for pod "kube-scheduler-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:16.368639 2094815 pod_ready.go:39] duration metric: took 37.422453666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:58:16.368662 2094815 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:58:16.368747 2094815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:58:16.460514 2094815 api_server.go:72] duration metric: took 38.353055398s to wait for apiserver process to appear ...
	I0819 11:58:16.460552 2094815 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:58:16.460577 2094815 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0819 11:58:16.466907 2094815 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0819 11:58:16.467997 2094815 api_server.go:141] control plane version: v1.31.0
	I0819 11:58:16.468026 2094815 api_server.go:131] duration metric: took 7.465803ms to wait for apiserver health ...
	I0819 11:58:16.468037 2094815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:58:16.573081 2094815 system_pods.go:59] 18 kube-system pods found
	I0819 11:58:16.573118 2094815 system_pods.go:61] "coredns-6f6b679f8f-tn2xl" [0ffe02ab-9e5f-4a7a-80c5-e214a48d181e] Running
	I0819 11:58:16.573129 2094815 system_pods.go:61] "csi-hostpath-attacher-0" [4122db6b-7f03-45b6-89af-8209c1a35139] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 11:58:16.573137 2094815 system_pods.go:61] "csi-hostpath-resizer-0" [630e5c60-6c07-4f54-b32b-be622b85793c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 11:58:16.573149 2094815 system_pods.go:61] "csi-hostpathplugin-gt5g4" [0f9a92dd-39ae-422b-8da7-0150875f812f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 11:58:16.573155 2094815 system_pods.go:61] "etcd-addons-442145" [79fa472f-51e6-454e-b9de-841cf410f533] Running
	I0819 11:58:16.573162 2094815 system_pods.go:61] "kube-apiserver-addons-442145" [898b56ec-651a-47e3-bd20-752d4e558c84] Running
	I0819 11:58:16.573169 2094815 system_pods.go:61] "kube-controller-manager-addons-442145" [163525c5-c7fd-42db-a25d-ab69549efbcb] Running
	I0819 11:58:16.573179 2094815 system_pods.go:61] "kube-ingress-dns-minikube" [fc2cbf6d-d587-477e-ba35-4d0322e45461] Running
	I0819 11:58:16.573187 2094815 system_pods.go:61] "kube-proxy-rpzwc" [f58e40e6-11d0-415a-a1d4-96237cba48e7] Running
	I0819 11:58:16.573196 2094815 system_pods.go:61] "kube-scheduler-addons-442145" [3cc958d7-4f5d-4e0e-b28a-d7e289cf35cd] Running
	I0819 11:58:16.573203 2094815 system_pods.go:61] "metrics-server-8988944d9-6pvxf" [19869c82-91f3-46fc-b815-fa4ff03b5433] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 11:58:16.573212 2094815 system_pods.go:61] "nvidia-device-plugin-daemonset-mtvp4" [94c95122-aa0e-4340-8546-8a6b1539fa00] Running
	I0819 11:58:16.573222 2094815 system_pods.go:61] "registry-6fb4cdfc84-dqqsf" [2c0f4d4e-c475-4bda-a9c5-8958f5c9f2f7] Running
	I0819 11:58:16.573230 2094815 system_pods.go:61] "registry-proxy-kfv85" [56d07c49-4b74-4b80-94c5-73b9277814b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 11:58:16.573243 2094815 system_pods.go:61] "snapshot-controller-56fcc65765-hlm68" [1b328062-23c0-468b-a880-b428645ecba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.573254 2094815 system_pods.go:61] "snapshot-controller-56fcc65765-jmhhj" [686b9a12-4c81-4594-992c-42a552cac282] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.573261 2094815 system_pods.go:61] "storage-provisioner" [62a5e8a3-5947-4048-92af-651cde360d04] Running
	I0819 11:58:16.573276 2094815 system_pods.go:61] "tiller-deploy-b48cc5f79-97zl7" [246084b2-d891-47a5-b87d-17cbe04e45e7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 11:58:16.573287 2094815 system_pods.go:74] duration metric: took 105.242799ms to wait for pod list to return data ...
	I0819 11:58:16.573301 2094815 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:58:16.593396 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.637188 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.639999 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.767365 2094815 default_sa.go:45] found service account: "default"
	I0819 11:58:16.767395 2094815 default_sa.go:55] duration metric: took 194.084902ms for default service account to be created ...
	I0819 11:58:16.767405 2094815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:58:16.853120 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.973876 2094815 system_pods.go:86] 18 kube-system pods found
	I0819 11:58:16.973911 2094815 system_pods.go:89] "coredns-6f6b679f8f-tn2xl" [0ffe02ab-9e5f-4a7a-80c5-e214a48d181e] Running
	I0819 11:58:16.973920 2094815 system_pods.go:89] "csi-hostpath-attacher-0" [4122db6b-7f03-45b6-89af-8209c1a35139] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 11:58:16.973926 2094815 system_pods.go:89] "csi-hostpath-resizer-0" [630e5c60-6c07-4f54-b32b-be622b85793c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 11:58:16.973934 2094815 system_pods.go:89] "csi-hostpathplugin-gt5g4" [0f9a92dd-39ae-422b-8da7-0150875f812f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 11:58:16.973940 2094815 system_pods.go:89] "etcd-addons-442145" [79fa472f-51e6-454e-b9de-841cf410f533] Running
	I0819 11:58:16.973947 2094815 system_pods.go:89] "kube-apiserver-addons-442145" [898b56ec-651a-47e3-bd20-752d4e558c84] Running
	I0819 11:58:16.973952 2094815 system_pods.go:89] "kube-controller-manager-addons-442145" [163525c5-c7fd-42db-a25d-ab69549efbcb] Running
	I0819 11:58:16.973964 2094815 system_pods.go:89] "kube-ingress-dns-minikube" [fc2cbf6d-d587-477e-ba35-4d0322e45461] Running
	I0819 11:58:16.973969 2094815 system_pods.go:89] "kube-proxy-rpzwc" [f58e40e6-11d0-415a-a1d4-96237cba48e7] Running
	I0819 11:58:16.973975 2094815 system_pods.go:89] "kube-scheduler-addons-442145" [3cc958d7-4f5d-4e0e-b28a-d7e289cf35cd] Running
	I0819 11:58:16.973982 2094815 system_pods.go:89] "metrics-server-8988944d9-6pvxf" [19869c82-91f3-46fc-b815-fa4ff03b5433] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 11:58:16.973990 2094815 system_pods.go:89] "nvidia-device-plugin-daemonset-mtvp4" [94c95122-aa0e-4340-8546-8a6b1539fa00] Running
	I0819 11:58:16.973994 2094815 system_pods.go:89] "registry-6fb4cdfc84-dqqsf" [2c0f4d4e-c475-4bda-a9c5-8958f5c9f2f7] Running
	I0819 11:58:16.973999 2094815 system_pods.go:89] "registry-proxy-kfv85" [56d07c49-4b74-4b80-94c5-73b9277814b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 11:58:16.974006 2094815 system_pods.go:89] "snapshot-controller-56fcc65765-hlm68" [1b328062-23c0-468b-a880-b428645ecba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.974011 2094815 system_pods.go:89] "snapshot-controller-56fcc65765-jmhhj" [686b9a12-4c81-4594-992c-42a552cac282] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.974016 2094815 system_pods.go:89] "storage-provisioner" [62a5e8a3-5947-4048-92af-651cde360d04] Running
	I0819 11:58:16.974021 2094815 system_pods.go:89] "tiller-deploy-b48cc5f79-97zl7" [246084b2-d891-47a5-b87d-17cbe04e45e7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 11:58:16.974029 2094815 system_pods.go:126] duration metric: took 206.617837ms to wait for k8s-apps to be running ...
	I0819 11:58:16.974036 2094815 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:58:16.974093 2094815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:58:17.009363 2094815 system_svc.go:56] duration metric: took 35.314678ms WaitForService to wait for kubelet
	I0819 11:58:17.009399 2094815 kubeadm.go:582] duration metric: took 38.901946923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:58:17.009430 2094815 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:58:17.094576 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.137918 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.140418 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.169066 2094815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:58:17.169110 2094815 node_conditions.go:123] node cpu capacity is 2
	I0819 11:58:17.169128 2094815 node_conditions.go:105] duration metric: took 159.690252ms to run NodePressure ...
	I0819 11:58:17.169145 2094815 start.go:241] waiting for startup goroutines ...
	I0819 11:58:17.354545 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:17.593984 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.638284 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.642853 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.853675 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.094105 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.137111 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.140537 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.353635 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.594646 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.637480 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.639574 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.853608 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.095960 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.197171 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.197789 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.354262 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.594039 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.637124 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.640461 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.853507 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.404943 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:20.405724 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.406428 2094815 kapi.go:107] duration metric: took 32.772889386s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 11:58:20.407477 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.594563 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:20.640730 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.853361 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.094323 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.140930 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.355131 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.600519 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.640854 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.854798 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.096097 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.144894 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.354624 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.594801 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.697342 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.853743 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.093505 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:23.149752 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.353821 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.595739 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:23.643907 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.855945 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.094034 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.143131 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.353394 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.594471 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.649890 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.858760 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.461887 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:25.462514 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.462566 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.594065 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:25.695544 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.853178 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.093451 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.140784 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.353123 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.594008 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.642652 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.853172 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.093789 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.140957 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.354037 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.595509 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.696482 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.852691 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.094782 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.141522 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.353550 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.594353 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.641095 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.852895 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.096189 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.143004 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.354218 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.594184 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.696213 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.852223 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.094934 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.142405 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.353579 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.594110 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.641369 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.853631 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.096402 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.140599 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.353152 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.593504 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.641601 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.852929 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.093870 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.141428 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.358430 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.599881 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.701750 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.853785 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:33.094736 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.141669 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:33.353736 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.164735 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.165596 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.173271 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.180264 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.192116 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.353327 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.596010 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.640526 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.853651 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.094182 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.140898 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.353604 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.594564 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.640870 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.857021 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.097832 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.203288 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.352813 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.594595 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.649973 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.853438 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.096147 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.143887 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.353384 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.594391 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.641092 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.853567 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.100919 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.140623 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.353435 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.595056 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.640638 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.853470 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.094891 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.141419 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.355393 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.594840 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.641167 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.852774 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.095041 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.141644 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.352888 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.594439 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.640818 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.853968 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.094665 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.141129 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.353032 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.595156 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.641369 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.853102 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.094483 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.141271 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.352991 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.594736 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.696049 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.854041 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.094837 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:43.142032 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.353744 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.595396 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:43.696975 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.854135 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:44.095311 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.141414 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:44.353704 2094815 kapi.go:107] duration metric: took 56.005316428s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 11:58:44.594889 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.641668 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.094232 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.141101 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.594310 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.640845 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.093354 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.140749 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.593687 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.645357 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.094495 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.141107 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.594101 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.640816 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.095083 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:48.140416 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.594936 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:48.641389 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.095009 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.140222 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.594634 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.641500 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.094377 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.141526 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.594046 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.640398 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.094382 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.142823 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.597805 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.641727 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.093484 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.141122 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.594564 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.641873 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.093427 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.141716 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.594394 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.641062 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.094178 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.140681 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.593990 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.640387 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.417223 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.421240 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.594573 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.640739 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.094157 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.140593 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.593872 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.643400 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.094625 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:57.141149 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.594805 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:57.642192 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.093807 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.141294 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.602225 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.643932 2094815 kapi.go:107] duration metric: took 1m11.007282094s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 11:58:59.102748 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:59.596788 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.102758 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.602957 2094815 kapi.go:107] duration metric: took 1m10.012763218s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 11:59:00.604208 2094815 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-442145 cluster.
	I0819 11:59:00.605407 2094815 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 11:59:00.606528 2094815 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 11:59:00.607773 2094815 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, metrics-server, nvidia-device-plugin, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 11:59:00.609006 2094815 addons.go:510] duration metric: took 1m22.501496602s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns metrics-server nvidia-device-plugin helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 11:59:00.609084 2094815 start.go:246] waiting for cluster config update ...
	I0819 11:59:00.609113 2094815 start.go:255] writing updated cluster config ...
	I0819 11:59:00.609484 2094815 ssh_runner.go:195] Run: rm -f paused
	I0819 11:59:00.661156 2094815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 11:59:00.662835 2094815 out.go:177] * Done! kubectl is now configured to use "addons-442145" cluster and "default" namespace by default
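	(Editor's note) The gcp-auth message above states that credential mounting can be skipped for a specific pod by adding a label with the `gcp-auth-skip-secret` key. As a minimal sketch of what such a pod configuration could look like — the pod name, the sleep command, and the label value "true" are illustrative assumptions; only the label key comes from the log message above:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                  # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"      # label key taken from the log above; the value is an assumed convention
	    spec:
	      containers:
	      - name: busybox
	        image: gcr.io/k8s-minikube/busybox   # image that appears later in this report's container list (digest omitted here)
	        command: ["sleep", "3600"]

	Per the messages above, a pod applied with a manifest like this in the addons-442145 cluster should not get the injected GCP credentials, while pods created earlier would need to be recreated (or the addon re-enabled with --refresh) to pick the credentials up.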
	
	
	==> CRI-O <==
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.176990829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068944176966157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fbdef49-f0ec-4a0e-8028-bc1794ccdd0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.177692166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1653ce18-e1a5-4e81-9f16-14b1fd75c6f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.177767028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1653ce18-e1a5-4e81-9f16-14b1fd75c6f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.178096653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181dd14112582b6fe8ee0940820c0660d6d21dee763c480c75c6e0a47cf11c1a,PodSandboxId:a0deef411737a36bf26240a2a39a76e9411362c8950aeaadff58697e0d6e5344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710846340622,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7wgcj,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 1636dfa8-dae5-49c2-a0eb-359054f2e3ca,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b14938e9ca487051c43d8721f4a5f86125746a2a194a2da50116e1539410ed8,PodSandboxId:0d1136f1929ebffc4fc8fb30ef97fde94629bc42ad3cfbf524cf8ed3932686af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710723866172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cdk7v,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 151aecac-5ed0-4849-b8f1-edb18f3e381f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1653ce18-e1a5-4e81-9f16-14b1fd75c6f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.219987195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1123309c-f060-4fbc-bfb2-d3d6791d78d0 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.220074486Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1123309c-f060-4fbc-bfb2-d3d6791d78d0 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.221455771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2defd3cb-90ca-46e7-ba53-2b7d805d8af0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.222715534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068944222688623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2defd3cb-90ca-46e7-ba53-2b7d805d8af0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.223494827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=534058b9-963c-4d95-a1e3-1e895a012fac name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.223558200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=534058b9-963c-4d95-a1e3-1e895a012fac name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.223840973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181dd14112582b6fe8ee0940820c0660d6d21dee763c480c75c6e0a47cf11c1a,PodSandboxId:a0deef411737a36bf26240a2a39a76e9411362c8950aeaadff58697e0d6e5344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710846340622,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7wgcj,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 1636dfa8-dae5-49c2-a0eb-359054f2e3ca,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b14938e9ca487051c43d8721f4a5f86125746a2a194a2da50116e1539410ed8,PodSandboxId:0d1136f1929ebffc4fc8fb30ef97fde94629bc42ad3cfbf524cf8ed3932686af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710723866172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cdk7v,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 151aecac-5ed0-4849-b8f1-edb18f3e381f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=534058b9-963c-4d95-a1e3-1e895a012fac name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.262854870Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=441618bd-9fbc-4fbe-b528-b0ba9effe36b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.262924575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=441618bd-9fbc-4fbe-b528-b0ba9effe36b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.264020113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cd2d69b-1bbe-4a05-bbd4-7b6a5d734987 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.265441241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068944265413460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cd2d69b-1bbe-4a05-bbd4-7b6a5d734987 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.266289364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02cd04c9-cc55-4a7d-8893-c0b21266beea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.266347950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02cd04c9-cc55-4a7d-8893-c0b21266beea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.266619845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181dd14112582b6fe8ee0940820c0660d6d21dee763c480c75c6e0a47cf11c1a,PodSandboxId:a0deef411737a36bf26240a2a39a76e9411362c8950aeaadff58697e0d6e5344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710846340622,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7wgcj,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 1636dfa8-dae5-49c2-a0eb-359054f2e3ca,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b14938e9ca487051c43d8721f4a5f86125746a2a194a2da50116e1539410ed8,PodSandboxId:0d1136f1929ebffc4fc8fb30ef97fde94629bc42ad3cfbf524cf8ed3932686af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710723866172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cdk7v,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 151aecac-5ed0-4849-b8f1-edb18f3e381f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02cd04c9-cc55-4a7d-8893-c0b21266beea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.300433489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec9f5cc3-115d-4397-b188-83593897ed4b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.300733477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec9f5cc3-115d-4397-b188-83593897ed4b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.302086428Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7bcd4ce-a123-45c6-a205-0c07ba9233c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.303478330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068944303453640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7bcd4ce-a123-45c6-a205-0c07ba9233c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.304087629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12d06840-2b5f-438d-842f-e85f8d5241ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.304137358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12d06840-2b5f-438d-842f-e85f8d5241ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:02:24 addons-442145 crio[683]: time="2024-08-19 12:02:24.304479846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181dd14112582b6fe8ee0940820c0660d6d21dee763c480c75c6e0a47cf11c1a,PodSandboxId:a0deef411737a36bf26240a2a39a76e9411362c8950aeaadff58697e0d6e5344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710846340622,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7wgcj,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 1636dfa8-dae5-49c2-a0eb-359054f2e3ca,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b14938e9ca487051c43d8721f4a5f86125746a2a194a2da50116e1539410ed8,PodSandboxId:0d1136f1929ebffc4fc8fb30ef97fde94629bc42ad3cfbf524cf8ed3932686af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724068710723866172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cdk7v,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 151aecac-5ed0-4849-b8f1-edb18f3e381f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12d06840-2b5f-438d-842f-e85f8d5241ef name=/runtime.v1.RuntimeService/ListContainers
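
The cri-o entries above are all level=debug and do not indicate a failure: the kubelet (and minikube's own log collection) repeatedly polls the CRI endpoints ListContainers, Version and ImageFsInfo, so the same container inventory is printed again and again under different request ids. A minimal sketch for reproducing the same listing interactively, assuming the minikube binary path and profile name used by this run and that crictl is available on the node image:

  out/minikube-linux-amd64 -p addons-442145 ssh "sudo crictl ps -a"   # asks cri-o for the container list over /var/run/crio/crio.sock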
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a3367af52e1d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   1bc402b81fe5d       hello-world-app-55bf9c44b4-qnwt7
	184c68c6e0f29       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   bc503bdf726dc       nginx
	1eaebe7e8a454       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   0b05ccb41acf1       busybox
	181dd14112582       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   3 minutes ago       Exited              patch                     0                   a0deef411737a       ingress-nginx-admission-patch-7wgcj
	9b14938e9ca48       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   3 minutes ago       Exited              create                    0                   0d1136f1929eb       ingress-nginx-admission-create-cdk7v
	e3d6d02f54a98       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   c2ae895d3ac20       metrics-server-8988944d9-6pvxf
	6254752d69982       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   5a5d7317b4763       local-path-provisioner-86d989889c-mbpxv
	6c82b35f9f026       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b9b34dc26097f       storage-provisioner
	45e962e7b4514       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   6db0b82d8669c       coredns-6f6b679f8f-tn2xl
	201fc6e23ba9a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             4 minutes ago       Running             kube-proxy                0                   63129ec0d0012       kube-proxy-rpzwc
	21e89f25b7676       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   1f381cb328e5b       etcd-addons-442145
	f6da520049170       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             4 minutes ago       Running             kube-apiserver            0                   71d2a96474007       kube-apiserver-addons-442145
	06a1084dc3058       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             4 minutes ago       Running             kube-scheduler            0                   9492061e9db4c       kube-scheduler-addons-442145
	ac0fd6bc35b71       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             4 minutes ago       Running             kube-controller-manager   0                   94722e7e7accd       kube-controller-manager-addons-442145
	
	
	==> coredns [45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[196058232]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 11:57:43.547) (total time: 30001ms):
	Trace[196058232]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:58:13.548)
	Trace[196058232]: [30.001314792s] [30.001314792s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1485178553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 11:57:43.553) (total time: 30000ms):
	Trace[1485178553]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:58:13.554)
	Trace[1485178553]: [30.00061879s] [30.00061879s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51174 - 43396 "HINFO IN 8104933949717401389.3285066136651019283. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009295986s
	[INFO] 10.244.0.22:34039 - 7240 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000512313s
	[INFO] 10.244.0.22:45891 - 23057 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213176s
	[INFO] 10.244.0.22:48975 - 6272 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000146151s
	[INFO] 10.244.0.22:48837 - 15818 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084326s
	[INFO] 10.244.0.22:60646 - 64239 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100267s
	[INFO] 10.244.0.22:45682 - 53888 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000056222s
	[INFO] 10.244.0.22:53667 - 61607 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000599993s
	[INFO] 10.244.0.22:43513 - 37327 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000599423s
	[INFO] 10.244.0.28:59444 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00025949s
	[INFO] 10.244.0.28:48261 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068857s
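
The ListAndWatch failures at the top of this CoreDNS log are 30s timeouts against https://10.96.0.1:443, the default `kubernetes` Service ClusterIP that fronts the API server; they fall in the first half-minute after startup (11:57:43-11:58:13), and CoreDNS went on to serve queries normally, as the later NOERROR/NXDOMAIN lookups show. A sketch for checking that Service and its endpoints, assuming the kubectl context name used by this test run:

  kubectl --context addons-442145 -n default get svc kubernetes
  kubectl --context addons-442145 -n default get endpoints kubernetes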
	
	
	==> describe nodes <==
	Name:               addons-442145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-442145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=addons-442145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_57_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-442145
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:57:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-442145
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:02:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:00:06 +0000   Mon, 19 Aug 2024 11:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:00:06 +0000   Mon, 19 Aug 2024 11:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:00:06 +0000   Mon, 19 Aug 2024 11:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:00:06 +0000   Mon, 19 Aug 2024 11:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    addons-442145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc55cc0793b94af2ad0daa5ec39f8ff5
	  System UUID:                fc55cc07-93b9-4af2-ad0d-aa5ec39f8ff5
	  Boot ID:                    c2eabfee-0247-40b5-b3a1-42dd2b5a4e27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  default                     hello-world-app-55bf9c44b4-qnwt7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-6f6b679f8f-tn2xl                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m46s
	  kube-system                 etcd-addons-442145                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m53s
	  kube-system                 kube-apiserver-addons-442145               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-addons-442145      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-rpzwc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-addons-442145               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 metrics-server-8988944d9-6pvxf             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m40s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  local-path-storage          local-path-provisioner-86d989889c-mbpxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m57s)  kubelet          Node addons-442145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m57s)  kubelet          Node addons-442145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m57s)  kubelet          Node addons-442145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m51s                  kubelet          Node addons-442145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s                  kubelet          Node addons-442145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s                  kubelet          Node addons-442145 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m50s                  kubelet          Node addons-442145 status is now: NodeReady
	  Normal  RegisteredNode           4m47s                  node-controller  Node addons-442145 event: Registered Node addons-442145 in Controller
	
	
	==> dmesg <==
	[  +4.729379] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +1.045668] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.079164] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.119117] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.251088] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 11:58] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.096849] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.116963] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.492986] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.261106] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.148929] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.025079] kauditd_printk_skb: 64 callbacks suppressed
	[ +10.425161] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.024301] kauditd_printk_skb: 15 callbacks suppressed
	[Aug19 11:59] kauditd_printk_skb: 56 callbacks suppressed
	[ +21.161050] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.917815] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.049803] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.270725] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.183051] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.007791] kauditd_printk_skb: 47 callbacks suppressed
	[Aug19 12:00] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.277456] kauditd_printk_skb: 42 callbacks suppressed
	[Aug19 12:02] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.569659] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2] <==
	{"level":"info","ts":"2024-08-19T11:58:55.405519Z","caller":"traceutil/trace.go:171","msg":"trace[983053068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"276.557046ms","start":"2024-08-19T11:58:55.128950Z","end":"2024-08-19T11:58:55.405507Z","steps":["trace[983053068] 'agreement among raft nodes before linearized reading'  (duration: 274.382506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:58:55.403550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.525327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:58:55.405751Z","caller":"traceutil/trace.go:171","msg":"trace[1820371117] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1157; }","duration":"294.668605ms","start":"2024-08-19T11:58:55.111018Z","end":"2024-08-19T11:58:55.405687Z","steps":["trace[1820371117] 'agreement among raft nodes before linearized reading'  (duration: 292.512483ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:28.563060Z","caller":"traceutil/trace.go:171","msg":"trace[1562016742] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"101.174403ms","start":"2024-08-19T11:59:28.461861Z","end":"2024-08-19T11:59:28.563036Z","steps":["trace[1562016742] 'process raft request'  (duration: 100.874093ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:40.786719Z","caller":"traceutil/trace.go:171","msg":"trace[1287512661] linearizableReadLoop","detail":"{readStateIndex:1505; appliedIndex:1504; }","duration":"171.731169ms","start":"2024-08-19T11:59:40.614960Z","end":"2024-08-19T11:59:40.786692Z","steps":["trace[1287512661] 'read index received'  (duration: 171.610324ms)","trace[1287512661] 'applied index is now lower than readState.Index'  (duration: 120.343µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:59:40.786959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.998671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-19T11:59:40.787025Z","caller":"traceutil/trace.go:171","msg":"trace[1326557946] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1457; }","duration":"172.080904ms","start":"2024-08-19T11:59:40.614931Z","end":"2024-08-19T11:59:40.787012Z","steps":["trace[1326557946] 'agreement among raft nodes before linearized reading'  (duration: 171.909653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:40.787026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.14362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:40.787063Z","caller":"traceutil/trace.go:171","msg":"trace[394561874] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1457; }","duration":"152.18756ms","start":"2024-08-19T11:59:40.634868Z","end":"2024-08-19T11:59:40.787056Z","steps":["trace[394561874] 'agreement among raft nodes before linearized reading'  (duration: 152.128388ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:40.787194Z","caller":"traceutil/trace.go:171","msg":"trace[557107817] transaction","detail":"{read_only:false; response_revision:1457; number_of_response:1; }","duration":"266.610427ms","start":"2024-08-19T11:59:40.520573Z","end":"2024-08-19T11:59:40.787184Z","steps":["trace[557107817] 'process raft request'  (duration: 266.040078ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:55.070091Z","caller":"traceutil/trace.go:171","msg":"trace[1979777551] linearizableReadLoop","detail":"{readStateIndex:1681; appliedIndex:1680; }","duration":"122.471569ms","start":"2024-08-19T11:59:54.947549Z","end":"2024-08-19T11:59:55.070021Z","steps":["trace[1979777551] 'read index received'  (duration: 122.33759ms)","trace[1979777551] 'applied index is now lower than readState.Index'  (duration: 133.528µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:59:55.070427Z","caller":"traceutil/trace.go:171","msg":"trace[1493306816] transaction","detail":"{read_only:false; response_revision:1623; number_of_response:1; }","duration":"163.4653ms","start":"2024-08-19T11:59:54.906948Z","end":"2024-08-19T11:59:55.070413Z","steps":["trace[1493306816] 'process raft request'  (duration: 162.983422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:55.070621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.005189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:55.070642Z","caller":"traceutil/trace.go:171","msg":"trace[1727256531] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1623; }","duration":"123.090561ms","start":"2024-08-19T11:59:54.947545Z","end":"2024-08-19T11:59:55.070636Z","steps":["trace[1727256531] 'agreement among raft nodes before linearized reading'  (duration: 122.990283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:55.070793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.337713ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:55.070810Z","caller":"traceutil/trace.go:171","msg":"trace[1889767900] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1623; }","duration":"106.362166ms","start":"2024-08-19T11:59:54.964443Z","end":"2024-08-19T11:59:55.070806Z","steps":["trace[1889767900] 'agreement among raft nodes before linearized reading'  (duration: 106.329518ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:00:00.333643Z","caller":"traceutil/trace.go:171","msg":"trace[921500174] transaction","detail":"{read_only:false; response_revision:1683; number_of_response:1; }","duration":"338.908247ms","start":"2024-08-19T11:59:59.994719Z","end":"2024-08-19T12:00:00.333628Z","steps":["trace[921500174] 'process raft request'  (duration: 338.787954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:00.333778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T11:59:59.994647Z","time spent":"339.053518ms","remote":"127.0.0.1:33356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":591,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/ingress/default/nginx-ingress\" mod_revision:1594 > success:<request_put:<key:\"/registry/ingress/default/nginx-ingress\" value_size:544 >> failure:<request_range:<key:\"/registry/ingress/default/nginx-ingress\" > >"}
	{"level":"info","ts":"2024-08-19T12:00:00.336335Z","caller":"traceutil/trace.go:171","msg":"trace[203872634] linearizableReadLoop","detail":"{readStateIndex:1743; appliedIndex:1742; }","duration":"223.718948ms","start":"2024-08-19T12:00:00.112603Z","end":"2024-08-19T12:00:00.336322Z","steps":["trace[203872634] 'read index received'  (duration: 221.060936ms)","trace[203872634] 'applied index is now lower than readState.Index'  (duration: 2.657559ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T12:00:00.336608Z","caller":"traceutil/trace.go:171","msg":"trace[1022449697] transaction","detail":"{read_only:false; response_revision:1684; number_of_response:1; }","duration":"228.477598ms","start":"2024-08-19T12:00:00.108121Z","end":"2024-08-19T12:00:00.336599Z","steps":["trace[1022449697] 'process raft request'  (duration: 228.099048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:00.337143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.524432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T12:00:00.337287Z","caller":"traceutil/trace.go:171","msg":"trace[341485831] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1684; }","duration":"224.633578ms","start":"2024-08-19T12:00:00.112600Z","end":"2024-08-19T12:00:00.337233Z","steps":["trace[341485831] 'agreement among raft nodes before linearized reading'  (duration: 224.509934ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:00.337519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.290233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T12:00:00.337691Z","caller":"traceutil/trace.go:171","msg":"trace[2055964969] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1684; }","duration":"126.465449ms","start":"2024-08-19T12:00:00.211219Z","end":"2024-08-19T12:00:00.337684Z","steps":["trace[2055964969] 'agreement among raft nodes before linearized reading'  (duration: 126.278854ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:00:31.465488Z","caller":"traceutil/trace.go:171","msg":"trace[1356514116] transaction","detail":"{read_only:false; response_revision:1866; number_of_response:1; }","duration":"218.737492ms","start":"2024-08-19T12:00:31.246731Z","end":"2024-08-19T12:00:31.465469Z","steps":["trace[1356514116] 'process raft request'  (duration: 218.616234ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:02:24 up 5 min,  0 users,  load average: 0.40, 1.05, 0.57
	Linux addons-442145 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 11:59:24.982973       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.126.149:443: connect: connection refused" logger="UnhandledError"
	E0819 11:59:24.984687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.126.149:443: connect: connection refused" logger="UnhandledError"
	I0819 11:59:25.042595       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0819 11:59:47.699155       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 11:59:50.838664       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 11:59:51.869613       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 11:59:52.749665       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 11:59:52.932582       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.136.165"}
	I0819 11:59:57.150761       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.88.125"}
	I0819 12:00:11.618898       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.618969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.649643       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.650428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.660609       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.660665       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.684918       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.685079       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.729148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.729232       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 12:00:12.649514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 12:00:12.730354       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 12:00:12.831614       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0819 12:02:13.804966       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.118.71"}
	
	
	==> kube-controller-manager [ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365] <==
	W0819 12:01:15.025709       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:01:15.025853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:01:22.301696       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:01:22.301761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:01:28.535667       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:01:28.535708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:01:36.004003       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:01:36.004124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:02:07.379336       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:07.379577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:02:07.708010       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:07.708110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 12:02:13.630346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.678661ms"
	I0819 12:02:13.649975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.073758ms"
	I0819 12:02:13.663044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.874666ms"
	I0819 12:02:13.663132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.923µs"
	I0819 12:02:15.696984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.830658ms"
	I0819 12:02:15.697682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.118µs"
	I0819 12:02:16.325714       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 12:02:16.330331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.98µs"
	I0819 12:02:16.333694       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0819 12:02:19.191684       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:19.191876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:02:21.880122       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:21.880191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:57:41.366678       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 11:57:41.453806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E0819 11:57:41.453886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:57:42.211390       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:57:42.211452       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:57:42.211491       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:57:42.397758       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:57:42.398090       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:57:42.398129       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:57:42.404736       1 config.go:197] "Starting service config controller"
	I0819 11:57:42.404787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:57:42.404809       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:57:42.404813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:57:42.405208       1 config.go:326] "Starting node config controller"
	I0819 11:57:42.405215       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:57:42.505364       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:57:42.505389       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:57:42.505414       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08] <==
	W0819 11:57:31.809476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 11:57:31.809537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:31.866827       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:57:31.866879       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 11:57:31.946987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:57:31.947043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:31.978012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:57:31.978102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.007049       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:57:32.007101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.036779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.036832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.088489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.088543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.156867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:57:32.156922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.186169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:57:32.186219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.193420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:57:32.193498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.211817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.211869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.315056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.315125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:57:33.581968       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:02:14 addons-442145 kubelet[1228]: E0819 12:02:14.021928    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068934021564227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585117,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:02:14 addons-442145 kubelet[1228]: I0819 12:02:14.885491    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7hs7\" (UniqueName: \"kubernetes.io/projected/fc2cbf6d-d587-477e-ba35-4d0322e45461-kube-api-access-g7hs7\") pod \"fc2cbf6d-d587-477e-ba35-4d0322e45461\" (UID: \"fc2cbf6d-d587-477e-ba35-4d0322e45461\") "
	Aug 19 12:02:14 addons-442145 kubelet[1228]: I0819 12:02:14.894530    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2cbf6d-d587-477e-ba35-4d0322e45461-kube-api-access-g7hs7" (OuterVolumeSpecName: "kube-api-access-g7hs7") pod "fc2cbf6d-d587-477e-ba35-4d0322e45461" (UID: "fc2cbf6d-d587-477e-ba35-4d0322e45461"). InnerVolumeSpecName "kube-api-access-g7hs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:02:14 addons-442145 kubelet[1228]: I0819 12:02:14.985718    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g7hs7\" (UniqueName: \"kubernetes.io/projected/fc2cbf6d-d587-477e-ba35-4d0322e45461-kube-api-access-g7hs7\") on node \"addons-442145\" DevicePath \"\""
	Aug 19 12:02:15 addons-442145 kubelet[1228]: I0819 12:02:15.667456    1228 scope.go:117] "RemoveContainer" containerID="64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe"
	Aug 19 12:02:15 addons-442145 kubelet[1228]: I0819 12:02:15.713218    1228 scope.go:117] "RemoveContainer" containerID="64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe"
	Aug 19 12:02:15 addons-442145 kubelet[1228]: E0819 12:02:15.714128    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe\": container with ID starting with 64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe not found: ID does not exist" containerID="64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe"
	Aug 19 12:02:15 addons-442145 kubelet[1228]: I0819 12:02:15.714189    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe"} err="failed to get container status \"64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe\": rpc error: code = NotFound desc = could not find container \"64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe\": container with ID starting with 64a7e3630fa178ec2a78dff5803dd4e19006557f057fa107db9cccc8766287fe not found: ID does not exist"
	Aug 19 12:02:15 addons-442145 kubelet[1228]: I0819 12:02:15.727342    1228 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-qnwt7" podStartSLOduration=2.030810796 podStartE2EDuration="2.727319696s" podCreationTimestamp="2024-08-19 12:02:13 +0000 UTC" firstStartedPulling="2024-08-19 12:02:14.28637845 +0000 UTC m=+280.725509838" lastFinishedPulling="2024-08-19 12:02:14.982887351 +0000 UTC m=+281.422018738" observedRunningTime="2024-08-19 12:02:15.687018373 +0000 UTC m=+282.126149765" watchObservedRunningTime="2024-08-19 12:02:15.727319696 +0000 UTC m=+282.166451097"
	Aug 19 12:02:17 addons-442145 kubelet[1228]: I0819 12:02:17.706492    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151aecac-5ed0-4849-b8f1-edb18f3e381f" path="/var/lib/kubelet/pods/151aecac-5ed0-4849-b8f1-edb18f3e381f/volumes"
	Aug 19 12:02:17 addons-442145 kubelet[1228]: I0819 12:02:17.706910    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1636dfa8-dae5-49c2-a0eb-359054f2e3ca" path="/var/lib/kubelet/pods/1636dfa8-dae5-49c2-a0eb-359054f2e3ca/volumes"
	Aug 19 12:02:17 addons-442145 kubelet[1228]: I0819 12:02:17.707411    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2cbf6d-d587-477e-ba35-4d0322e45461" path="/var/lib/kubelet/pods/fc2cbf6d-d587-477e-ba35-4d0322e45461/volumes"
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.516994    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1-webhook-cert\") pod \"dd3c95fb-9508-4bee-857c-cfbc32b2f9d1\" (UID: \"dd3c95fb-9508-4bee-857c-cfbc32b2f9d1\") "
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.517030    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59pq\" (UniqueName: \"kubernetes.io/projected/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1-kube-api-access-k59pq\") pod \"dd3c95fb-9508-4bee-857c-cfbc32b2f9d1\" (UID: \"dd3c95fb-9508-4bee-857c-cfbc32b2f9d1\") "
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.519451    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dd3c95fb-9508-4bee-857c-cfbc32b2f9d1" (UID: "dd3c95fb-9508-4bee-857c-cfbc32b2f9d1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.520385    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1-kube-api-access-k59pq" (OuterVolumeSpecName: "kube-api-access-k59pq") pod "dd3c95fb-9508-4bee-857c-cfbc32b2f9d1" (UID: "dd3c95fb-9508-4bee-857c-cfbc32b2f9d1"). InnerVolumeSpecName "kube-api-access-k59pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.617720    1228 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1-webhook-cert\") on node \"addons-442145\" DevicePath \"\""
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.617749    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k59pq\" (UniqueName: \"kubernetes.io/projected/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1-kube-api-access-k59pq\") on node \"addons-442145\" DevicePath \"\""
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.691422    1228 scope.go:117] "RemoveContainer" containerID="464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6"
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.707806    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3c95fb-9508-4bee-857c-cfbc32b2f9d1" path="/var/lib/kubelet/pods/dd3c95fb-9508-4bee-857c-cfbc32b2f9d1/volumes"
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.714023    1228 scope.go:117] "RemoveContainer" containerID="464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6"
	Aug 19 12:02:19 addons-442145 kubelet[1228]: E0819 12:02:19.716467    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6\": container with ID starting with 464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6 not found: ID does not exist" containerID="464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6"
	Aug 19 12:02:19 addons-442145 kubelet[1228]: I0819 12:02:19.716511    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6"} err="failed to get container status \"464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6\": rpc error: code = NotFound desc = could not find container \"464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6\": container with ID starting with 464879186b51bbef841dc76a37f7b821dd978fabd44a8049a2cff367d608afa6 not found: ID does not exist"
	Aug 19 12:02:24 addons-442145 kubelet[1228]: E0819 12:02:24.025444    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068944024523815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:02:24 addons-442145 kubelet[1228]: E0819 12:02:24.025600    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724068944024523815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551] <==
	I0819 11:57:45.993938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:57:46.077537       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:57:46.077609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:57:46.162351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:57:46.168605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-442145_63aeb766-6c92-4616-afe4-7f149d64b057!
	I0819 11:57:46.168723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21bb4f94-56c4-42f5-8085-2d508c8df71f", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-442145_63aeb766-6c92-4616-afe4-7f149d64b057 became leader
	I0819 11:57:46.269624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-442145_63aeb766-6c92-4616-afe4-7f149d64b057!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-442145 -n addons-442145
helpers_test.go:261: (dbg) Run:  kubectl --context addons-442145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.90s)
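For readers triaging the Ingress failure above, a minimal manual re-check outside the harness could look like the sketch below. It is hypothetical: the profile addons-442145 and the Ingress object nginx-ingress are taken from the logs above, while the controller deployment name ingress-nginx-controller is assumed (its usual name in the minikube ingress addon).

    kubectl --context addons-442145 -n ingress-nginx get pods -o wide   # controller running and Ready?
    kubectl --context addons-442145 get ingress nginx-ingress           # admitted, with an address?
    kubectl --context addons-442145 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
    minikube -p addons-442145 ip                                        # node IP to retry the request against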

                                                
                                    
TestAddons/parallel/MetricsServer (317.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.397934ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-6pvxf" [19869c82-91f3-46fc-b815-fa4ff03b5433] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003491987s
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (69.15131ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 2m9.148797555s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (70.050788ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 2m12.769011354s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (73.595991ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 2m19.480810661s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (71.708727ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 2m24.26224249s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (65.887162ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 2m39.078117393s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (66.805229ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 3m0.011084106s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (64.17888ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 3m21.670168474s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (65.309401ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 4m7.884203548s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (66.249787ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 4m50.025143916s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (70.750698ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 5m54.039054205s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (65.822817ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 6m30.973107104s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-442145 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-442145 top pods -n kube-system: exit status 1 (64.545774ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tn2xl, age: 7m17.877626419s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable metrics-server --alsologtostderr -v=1
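The repeated "Metrics not available" errors above line up with the kube-apiserver log earlier in this report, where the aggregated v1beta1.metrics.k8s.io API was returning connection refused. A minimal manual check, again hypothetical and not run by the harness (profile name taken from the commands above; the deployment name metrics-server matches the pod listed in the node description):

    kubectl --context addons-442145 get apiservice v1beta1.metrics.k8s.io          # Available should be True
    kubectl --context addons-442145 get --raw /apis/metrics.k8s.io/v1beta1/nodes   # does the aggregated API answer?
    kubectl --context addons-442145 -n kube-system logs deploy/metrics-server --tail=50

If the APIService never becomes Available, the metrics-server log usually shows the underlying cause, such as failed kubelet scrapes or certificate verification errors.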
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-442145 -n addons-442145
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 logs -n 25: (1.213997809s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-096642                                                                     | download-only-096642 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-869957 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | binary-mirror-869957                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44967                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-869957                                                                     | binary-mirror-869957 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-442145 --wait=true                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-442145 ssh cat                                                                       | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | /opt/local-path-provisioner/pvc-6db4af90-c59c-4e3b-9fe3-ca012679ce85_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | -p addons-442145                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| ip      | addons-442145 ip                                                                            | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | addons-442145                                                                               |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 11:59 UTC | 19 Aug 24 11:59 UTC |
	|         | -p addons-442145                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-442145 ssh curl -s                                                                   | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-442145 addons                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-442145 ip                                                                            | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:02 UTC | 19 Aug 24 12:02 UTC |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:02 UTC | 19 Aug 24 12:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-442145 addons disable                                                                | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:02 UTC | 19 Aug 24 12:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-442145 addons                                                                        | addons-442145        | jenkins | v1.33.1 | 19 Aug 24 12:04 UTC | 19 Aug 24 12:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:56:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:56:52.483806 2094815 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:52.484097 2094815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:52.484107 2094815 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:52.484112 2094815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:52.484368 2094815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 11:56:52.484996 2094815 out.go:352] Setting JSON to false
	I0819 11:56:52.485972 2094815 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160762,"bootTime":1723907850,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:56:52.486036 2094815 start.go:139] virtualization: kvm guest
	I0819 11:56:52.487954 2094815 out.go:177] * [addons-442145] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:56:52.489283 2094815 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 11:56:52.489311 2094815 notify.go:220] Checking for updates...
	I0819 11:56:52.491715 2094815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:52.493446 2094815 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 11:56:52.494788 2094815 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:52.496147 2094815 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:56:52.497502 2094815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:52.498737 2094815 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:56:52.531754 2094815 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 11:56:52.532860 2094815 start.go:297] selected driver: kvm2
	I0819 11:56:52.532884 2094815 start.go:901] validating driver "kvm2" against <nil>
	I0819 11:56:52.532897 2094815 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:52.533635 2094815 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:52.533730 2094815 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:56:52.549142 2094815 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:56:52.549198 2094815 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:52.549421 2094815 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:52.549453 2094815 cni.go:84] Creating CNI manager for ""
	I0819 11:56:52.549465 2094815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:56:52.549473 2094815 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:52.549530 2094815 start.go:340] cluster config:
	{Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:52.549625 2094815 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:52.551235 2094815 out.go:177] * Starting "addons-442145" primary control-plane node in "addons-442145" cluster
	I0819 11:56:52.552105 2094815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:56:52.552135 2094815 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:56:52.552145 2094815 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:52.552259 2094815 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:56:52.552273 2094815 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:56:52.552573 2094815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/config.json ...
	I0819 11:56:52.552594 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/config.json: {Name:mk47493314c6c24cda5c36b266fbefee8b948291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:52.552752 2094815 start.go:360] acquireMachinesLock for addons-442145: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:52.552819 2094815 start.go:364] duration metric: took 48.351µs to acquireMachinesLock for "addons-442145"
	I0819 11:56:52.552843 2094815 start.go:93] Provisioning new machine with config: &{Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:56:52.552902 2094815 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 11:56:52.554257 2094815 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 11:56:52.554400 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:56:52.554447 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:56:52.568830 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0819 11:56:52.569273 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:56:52.569810 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:56:52.569838 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:56:52.570191 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:56:52.570358 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:56:52.570512 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:56:52.570647 2094815 start.go:159] libmachine.API.Create for "addons-442145" (driver="kvm2")
	I0819 11:56:52.570673 2094815 client.go:168] LocalClient.Create starting
	I0819 11:56:52.570719 2094815 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 11:56:52.902299 2094815 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 11:56:53.108326 2094815 main.go:141] libmachine: Running pre-create checks...
	I0819 11:56:53.108365 2094815 main.go:141] libmachine: (addons-442145) Calling .PreCreateCheck
	I0819 11:56:53.108967 2094815 main.go:141] libmachine: (addons-442145) Calling .GetConfigRaw
	I0819 11:56:53.109462 2094815 main.go:141] libmachine: Creating machine...
	I0819 11:56:53.109476 2094815 main.go:141] libmachine: (addons-442145) Calling .Create
	I0819 11:56:53.109658 2094815 main.go:141] libmachine: (addons-442145) Creating KVM machine...
	I0819 11:56:53.111190 2094815 main.go:141] libmachine: (addons-442145) DBG | found existing default KVM network
	I0819 11:56:53.112057 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.111911 2094837 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0819 11:56:53.112128 2094815 main.go:141] libmachine: (addons-442145) DBG | created network xml: 
	I0819 11:56:53.112158 2094815 main.go:141] libmachine: (addons-442145) DBG | <network>
	I0819 11:56:53.112187 2094815 main.go:141] libmachine: (addons-442145) DBG |   <name>mk-addons-442145</name>
	I0819 11:56:53.112198 2094815 main.go:141] libmachine: (addons-442145) DBG |   <dns enable='no'/>
	I0819 11:56:53.112223 2094815 main.go:141] libmachine: (addons-442145) DBG |   
	I0819 11:56:53.112253 2094815 main.go:141] libmachine: (addons-442145) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 11:56:53.112265 2094815 main.go:141] libmachine: (addons-442145) DBG |     <dhcp>
	I0819 11:56:53.112274 2094815 main.go:141] libmachine: (addons-442145) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 11:56:53.112285 2094815 main.go:141] libmachine: (addons-442145) DBG |     </dhcp>
	I0819 11:56:53.112296 2094815 main.go:141] libmachine: (addons-442145) DBG |   </ip>
	I0819 11:56:53.112311 2094815 main.go:141] libmachine: (addons-442145) DBG |   
	I0819 11:56:53.112321 2094815 main.go:141] libmachine: (addons-442145) DBG | </network>
	I0819 11:56:53.112332 2094815 main.go:141] libmachine: (addons-442145) DBG | 
	I0819 11:56:53.117383 2094815 main.go:141] libmachine: (addons-442145) DBG | trying to create private KVM network mk-addons-442145 192.168.39.0/24...
	I0819 11:56:53.187667 2094815 main.go:141] libmachine: (addons-442145) DBG | private KVM network mk-addons-442145 192.168.39.0/24 created
	I0819 11:56:53.187721 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.187609 2094837 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:53.187751 2094815 main.go:141] libmachine: (addons-442145) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145 ...
	I0819 11:56:53.187781 2094815 main.go:141] libmachine: (addons-442145) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:56:53.187797 2094815 main.go:141] libmachine: (addons-442145) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:56:53.491523 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.491390 2094837 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa...
	I0819 11:56:53.803136 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.803003 2094837 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/addons-442145.rawdisk...
	I0819 11:56:53.803168 2094815 main.go:141] libmachine: (addons-442145) DBG | Writing magic tar header
	I0819 11:56:53.803178 2094815 main.go:141] libmachine: (addons-442145) DBG | Writing SSH key tar header
	I0819 11:56:53.803186 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:53.803132 2094837 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145 ...
	I0819 11:56:53.803319 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145
	I0819 11:56:53.803351 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 11:56:53.803366 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145 (perms=drwx------)
	I0819 11:56:53.803380 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:56:53.803386 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 11:56:53.803396 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 11:56:53.803402 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:56:53.803413 2094815 main.go:141] libmachine: (addons-442145) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:56:53.803421 2094815 main.go:141] libmachine: (addons-442145) Creating domain...
	I0819 11:56:53.803432 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:53.803445 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 11:56:53.803454 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:56:53.803464 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:56:53.803473 2094815 main.go:141] libmachine: (addons-442145) DBG | Checking permissions on dir: /home
	I0819 11:56:53.803488 2094815 main.go:141] libmachine: (addons-442145) DBG | Skipping /home - not owner
	I0819 11:56:53.804674 2094815 main.go:141] libmachine: (addons-442145) define libvirt domain using xml: 
	I0819 11:56:53.804693 2094815 main.go:141] libmachine: (addons-442145) <domain type='kvm'>
	I0819 11:56:53.804699 2094815 main.go:141] libmachine: (addons-442145)   <name>addons-442145</name>
	I0819 11:56:53.804705 2094815 main.go:141] libmachine: (addons-442145)   <memory unit='MiB'>4000</memory>
	I0819 11:56:53.804714 2094815 main.go:141] libmachine: (addons-442145)   <vcpu>2</vcpu>
	I0819 11:56:53.804720 2094815 main.go:141] libmachine: (addons-442145)   <features>
	I0819 11:56:53.804728 2094815 main.go:141] libmachine: (addons-442145)     <acpi/>
	I0819 11:56:53.804734 2094815 main.go:141] libmachine: (addons-442145)     <apic/>
	I0819 11:56:53.804741 2094815 main.go:141] libmachine: (addons-442145)     <pae/>
	I0819 11:56:53.804754 2094815 main.go:141] libmachine: (addons-442145)     
	I0819 11:56:53.804763 2094815 main.go:141] libmachine: (addons-442145)   </features>
	I0819 11:56:53.804772 2094815 main.go:141] libmachine: (addons-442145)   <cpu mode='host-passthrough'>
	I0819 11:56:53.804779 2094815 main.go:141] libmachine: (addons-442145)   
	I0819 11:56:53.804788 2094815 main.go:141] libmachine: (addons-442145)   </cpu>
	I0819 11:56:53.804794 2094815 main.go:141] libmachine: (addons-442145)   <os>
	I0819 11:56:53.804799 2094815 main.go:141] libmachine: (addons-442145)     <type>hvm</type>
	I0819 11:56:53.804805 2094815 main.go:141] libmachine: (addons-442145)     <boot dev='cdrom'/>
	I0819 11:56:53.804810 2094815 main.go:141] libmachine: (addons-442145)     <boot dev='hd'/>
	I0819 11:56:53.804816 2094815 main.go:141] libmachine: (addons-442145)     <bootmenu enable='no'/>
	I0819 11:56:53.804822 2094815 main.go:141] libmachine: (addons-442145)   </os>
	I0819 11:56:53.804827 2094815 main.go:141] libmachine: (addons-442145)   <devices>
	I0819 11:56:53.804832 2094815 main.go:141] libmachine: (addons-442145)     <disk type='file' device='cdrom'>
	I0819 11:56:53.804867 2094815 main.go:141] libmachine: (addons-442145)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/boot2docker.iso'/>
	I0819 11:56:53.804886 2094815 main.go:141] libmachine: (addons-442145)       <target dev='hdc' bus='scsi'/>
	I0819 11:56:53.804927 2094815 main.go:141] libmachine: (addons-442145)       <readonly/>
	I0819 11:56:53.804954 2094815 main.go:141] libmachine: (addons-442145)     </disk>
	I0819 11:56:53.804966 2094815 main.go:141] libmachine: (addons-442145)     <disk type='file' device='disk'>
	I0819 11:56:53.804979 2094815 main.go:141] libmachine: (addons-442145)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:56:53.805012 2094815 main.go:141] libmachine: (addons-442145)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/addons-442145.rawdisk'/>
	I0819 11:56:53.805027 2094815 main.go:141] libmachine: (addons-442145)       <target dev='hda' bus='virtio'/>
	I0819 11:56:53.805061 2094815 main.go:141] libmachine: (addons-442145)     </disk>
	I0819 11:56:53.805084 2094815 main.go:141] libmachine: (addons-442145)     <interface type='network'>
	I0819 11:56:53.805096 2094815 main.go:141] libmachine: (addons-442145)       <source network='mk-addons-442145'/>
	I0819 11:56:53.805108 2094815 main.go:141] libmachine: (addons-442145)       <model type='virtio'/>
	I0819 11:56:53.805119 2094815 main.go:141] libmachine: (addons-442145)     </interface>
	I0819 11:56:53.805131 2094815 main.go:141] libmachine: (addons-442145)     <interface type='network'>
	I0819 11:56:53.805144 2094815 main.go:141] libmachine: (addons-442145)       <source network='default'/>
	I0819 11:56:53.805155 2094815 main.go:141] libmachine: (addons-442145)       <model type='virtio'/>
	I0819 11:56:53.805167 2094815 main.go:141] libmachine: (addons-442145)     </interface>
	I0819 11:56:53.805178 2094815 main.go:141] libmachine: (addons-442145)     <serial type='pty'>
	I0819 11:56:53.805201 2094815 main.go:141] libmachine: (addons-442145)       <target port='0'/>
	I0819 11:56:53.805223 2094815 main.go:141] libmachine: (addons-442145)     </serial>
	I0819 11:56:53.805235 2094815 main.go:141] libmachine: (addons-442145)     <console type='pty'>
	I0819 11:56:53.805244 2094815 main.go:141] libmachine: (addons-442145)       <target type='serial' port='0'/>
	I0819 11:56:53.805255 2094815 main.go:141] libmachine: (addons-442145)     </console>
	I0819 11:56:53.805265 2094815 main.go:141] libmachine: (addons-442145)     <rng model='virtio'>
	I0819 11:56:53.805282 2094815 main.go:141] libmachine: (addons-442145)       <backend model='random'>/dev/random</backend>
	I0819 11:56:53.805296 2094815 main.go:141] libmachine: (addons-442145)     </rng>
	I0819 11:56:53.805306 2094815 main.go:141] libmachine: (addons-442145)     
	I0819 11:56:53.805314 2094815 main.go:141] libmachine: (addons-442145)     
	I0819 11:56:53.805325 2094815 main.go:141] libmachine: (addons-442145)   </devices>
	I0819 11:56:53.805337 2094815 main.go:141] libmachine: (addons-442145) </domain>
	I0819 11:56:53.805345 2094815 main.go:141] libmachine: (addons-442145) 
	I0819 11:56:53.809662 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:d1:71:35 in network default
	I0819 11:56:53.810217 2094815 main.go:141] libmachine: (addons-442145) Ensuring networks are active...
	I0819 11:56:53.810240 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:53.810982 2094815 main.go:141] libmachine: (addons-442145) Ensuring network default is active
	I0819 11:56:53.811304 2094815 main.go:141] libmachine: (addons-442145) Ensuring network mk-addons-442145 is active
	I0819 11:56:53.811856 2094815 main.go:141] libmachine: (addons-442145) Getting domain xml...
	I0819 11:56:53.812674 2094815 main.go:141] libmachine: (addons-442145) Creating domain...
	I0819 11:56:55.003250 2094815 main.go:141] libmachine: (addons-442145) Waiting to get IP...
	I0819 11:56:55.004077 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.004477 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.004498 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.004451 2094837 retry.go:31] will retry after 305.688251ms: waiting for machine to come up
	I0819 11:56:55.312281 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.312816 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.312848 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.312764 2094837 retry.go:31] will retry after 256.992875ms: waiting for machine to come up
	I0819 11:56:55.571588 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.572365 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.572398 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.572328 2094837 retry.go:31] will retry after 333.382572ms: waiting for machine to come up
	I0819 11:56:55.906859 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:55.907261 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:55.907302 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:55.907220 2094837 retry.go:31] will retry after 527.606663ms: waiting for machine to come up
	I0819 11:56:56.436945 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:56.437464 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:56.437493 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:56.437407 2094837 retry.go:31] will retry after 696.017259ms: waiting for machine to come up
	I0819 11:56:57.135602 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:57.136128 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:57.136175 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:57.136090 2094837 retry.go:31] will retry after 639.282514ms: waiting for machine to come up
	I0819 11:56:57.776869 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:57.777325 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:57.777355 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:57.777285 2094837 retry.go:31] will retry after 1.177662974s: waiting for machine to come up
	I0819 11:56:58.956745 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:56:58.957115 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:56:58.957137 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:56:58.957079 2094837 retry.go:31] will retry after 1.305160326s: waiting for machine to come up
	I0819 11:57:00.264660 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:00.265092 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:00.265125 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:00.265041 2094837 retry.go:31] will retry after 1.174424389s: waiting for machine to come up
	I0819 11:57:01.441319 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:01.441690 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:01.441717 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:01.441661 2094837 retry.go:31] will retry after 1.42161492s: waiting for machine to come up
	I0819 11:57:02.865080 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:02.865511 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:02.865547 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:02.865452 2094837 retry.go:31] will retry after 1.822850539s: waiting for machine to come up
	I0819 11:57:04.689761 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:04.690195 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:04.690215 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:04.690175 2094837 retry.go:31] will retry after 2.578005011s: waiting for machine to come up
	I0819 11:57:07.270904 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:07.271284 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:07.271305 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:07.271269 2094837 retry.go:31] will retry after 3.74788868s: waiting for machine to come up
	I0819 11:57:11.023591 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:11.024020 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find current IP address of domain addons-442145 in network mk-addons-442145
	I0819 11:57:11.024047 2094815 main.go:141] libmachine: (addons-442145) DBG | I0819 11:57:11.023971 2094837 retry.go:31] will retry after 4.593179123s: waiting for machine to come up
	I0819 11:57:15.620529 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.621040 2094815 main.go:141] libmachine: (addons-442145) Found IP for machine: 192.168.39.65
	I0819 11:57:15.621064 2094815 main.go:141] libmachine: (addons-442145) Reserving static IP address...
	I0819 11:57:15.621075 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has current primary IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.621390 2094815 main.go:141] libmachine: (addons-442145) DBG | unable to find host DHCP lease matching {name: "addons-442145", mac: "52:54:00:9d:f2:1a", ip: "192.168.39.65"} in network mk-addons-442145
	I0819 11:57:15.715288 2094815 main.go:141] libmachine: (addons-442145) DBG | Getting to WaitForSSH function...
	I0819 11:57:15.715323 2094815 main.go:141] libmachine: (addons-442145) Reserved static IP address: 192.168.39.65
	I0819 11:57:15.715336 2094815 main.go:141] libmachine: (addons-442145) Waiting for SSH to be available...
	I0819 11:57:15.718339 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.718867 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:15.718903 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.719084 2094815 main.go:141] libmachine: (addons-442145) DBG | Using SSH client type: external
	I0819 11:57:15.719111 2094815 main.go:141] libmachine: (addons-442145) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa (-rw-------)
	I0819 11:57:15.719150 2094815 main.go:141] libmachine: (addons-442145) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:57:15.719165 2094815 main.go:141] libmachine: (addons-442145) DBG | About to run SSH command:
	I0819 11:57:15.719181 2094815 main.go:141] libmachine: (addons-442145) DBG | exit 0
	I0819 11:57:15.844638 2094815 main.go:141] libmachine: (addons-442145) DBG | SSH cmd err, output: <nil>: 
	I0819 11:57:15.844904 2094815 main.go:141] libmachine: (addons-442145) KVM machine creation complete!
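	The "will retry after ...: waiting for machine to come up" lines above come from a generic wait-and-retry helper that keeps polling libvirt for a DHCP lease while growing the delay between attempts. A hypothetical Go sketch of that wait-with-growing-jittered-delay pattern (an illustration only, not minikube's actual retry.go) follows.

// waitretry.go - hypothetical illustration of the retry pattern seen in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor calls check until it succeeds or timeout elapses, roughly doubling
// a jittered delay between attempts, like the delays visible in the log above.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		// add jitter so concurrent waiters do not poll in lockstep
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}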
	I0819 11:57:15.845224 2094815 main.go:141] libmachine: (addons-442145) Calling .GetConfigRaw
	I0819 11:57:15.847748 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:15.847978 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:15.848203 2094815 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:57:15.848225 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:15.849714 2094815 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:57:15.849733 2094815 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:57:15.849742 2094815 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:57:15.849772 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:15.852476 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.852774 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:15.852806 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.853065 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:15.853242 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.853413 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.853591 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:15.853768 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:15.853971 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:15.853984 2094815 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:57:15.955677 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:57:15.955704 2094815 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:57:15.955712 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:15.958741 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.959105 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:15.959138 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:15.959313 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:15.959560 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.959749 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:15.959900 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:15.960066 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:15.960294 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:15.960309 2094815 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:57:16.064922 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:57:16.065003 2094815 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:57:16.065015 2094815 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:57:16.065027 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:57:16.065298 2094815 buildroot.go:166] provisioning hostname "addons-442145"
	I0819 11:57:16.065335 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:57:16.065521 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.068249 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.068629 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.068675 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.068784 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.068965 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.069116 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.069252 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.069396 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:16.069586 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:16.069601 2094815 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-442145 && echo "addons-442145" | sudo tee /etc/hostname
	I0819 11:57:16.187025 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442145
	
	I0819 11:57:16.187072 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.189948 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.190310 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.190342 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.190471 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.190685 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.190862 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.190990 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.191160 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:16.191411 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:16.191434 2094815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-442145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-442145/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-442145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:57:16.301392 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:57:16.301445 2094815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 11:57:16.301515 2094815 buildroot.go:174] setting up certificates
	I0819 11:57:16.301536 2094815 provision.go:84] configureAuth start
	I0819 11:57:16.301561 2094815 main.go:141] libmachine: (addons-442145) Calling .GetMachineName
	I0819 11:57:16.301906 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:16.304975 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.305363 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.305394 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.305592 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.307976 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.308392 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.308419 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.308568 2094815 provision.go:143] copyHostCerts
	I0819 11:57:16.308649 2094815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 11:57:16.308786 2094815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 11:57:16.308873 2094815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 11:57:16.308945 2094815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.addons-442145 san=[127.0.0.1 192.168.39.65 addons-442145 localhost minikube]
	I0819 11:57:16.546651 2094815 provision.go:177] copyRemoteCerts
	I0819 11:57:16.546716 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:57:16.546746 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.549703 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.550007 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.550029 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.550221 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.550429 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.550591 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.550754 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:16.637278 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 11:57:16.663873 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:57:16.688085 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:57:16.712248 2094815 provision.go:87] duration metric: took 410.694168ms to configureAuth
	I0819 11:57:16.712280 2094815 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:57:16.712449 2094815 config.go:182] Loaded profile config "addons-442145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:16.712530 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.715310 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.715666 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.715687 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.715857 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.716079 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.716250 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.716415 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.716613 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:16.716817 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:16.716836 2094815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:57:16.968249 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:57:16.968281 2094815 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:57:16.968289 2094815 main.go:141] libmachine: (addons-442145) Calling .GetURL
	I0819 11:57:16.969651 2094815 main.go:141] libmachine: (addons-442145) DBG | Using libvirt version 6000000
	I0819 11:57:16.971995 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.972350 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.972379 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.972606 2094815 main.go:141] libmachine: Docker is up and running!
	I0819 11:57:16.972625 2094815 main.go:141] libmachine: Reticulating splines...
	I0819 11:57:16.972632 2094815 client.go:171] duration metric: took 24.40194727s to LocalClient.Create
	I0819 11:57:16.972656 2094815 start.go:167] duration metric: took 24.402010567s to libmachine.API.Create "addons-442145"
	I0819 11:57:16.972667 2094815 start.go:293] postStartSetup for "addons-442145" (driver="kvm2")
	I0819 11:57:16.972676 2094815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:57:16.972712 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:16.972952 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:57:16.972982 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:16.975108 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.975458 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:16.975484 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:16.975670 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:16.975829 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:16.976001 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:16.976109 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:17.058504 2094815 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:57:17.062819 2094815 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:57:17.062847 2094815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 11:57:17.062938 2094815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 11:57:17.062978 2094815 start.go:296] duration metric: took 90.295043ms for postStartSetup
	I0819 11:57:17.063027 2094815 main.go:141] libmachine: (addons-442145) Calling .GetConfigRaw
	I0819 11:57:17.063686 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:17.066377 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.066777 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.066802 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.066982 2094815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/config.json ...
	I0819 11:57:17.067199 2094815 start.go:128] duration metric: took 24.514284037s to createHost
	I0819 11:57:17.067228 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:17.069466 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.069776 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.069801 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.069954 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:17.070143 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.070323 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.070446 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:17.070633 2094815 main.go:141] libmachine: Using SSH client type: native
	I0819 11:57:17.070795 2094815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0819 11:57:17.070806 2094815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:57:17.172667 2094815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724068637.149629524
	
	I0819 11:57:17.172708 2094815 fix.go:216] guest clock: 1724068637.149629524
	I0819 11:57:17.172722 2094815 fix.go:229] Guest: 2024-08-19 11:57:17.149629524 +0000 UTC Remote: 2024-08-19 11:57:17.067213451 +0000 UTC m=+24.618595762 (delta=82.416073ms)
	I0819 11:57:17.172797 2094815 fix.go:200] guest clock delta is within tolerance: 82.416073ms
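The `date +%s.%N` round-trip above is how the guest clock gets compared against the host: parse the guest's seconds.nanoseconds output and check the skew against a tolerance. A minimal sketch of that check in Go, using the two timestamps from the log lines above (the 2s tolerance is an assumption for illustration, not necessarily minikube's actual threshold):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values lifted from the "guest clock" / "Remote" log lines above.
	guest, err := parseGuestClock("1724068637.149629524")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 8, 19, 11, 57, 17, 67213451, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
}
```

Run against the logged values this prints a delta of 82.416073ms, matching the "delta=82.416073ms" figure reported by fix.go above.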
	I0819 11:57:17.172806 2094815 start.go:83] releasing machines lock for "addons-442145", held for 24.619973795s
	I0819 11:57:17.172843 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.173134 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:17.175770 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.176153 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.176204 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.176336 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.176864 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.177050 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:17.177132 2094815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:57:17.177217 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:17.177245 2094815 ssh_runner.go:195] Run: cat /version.json
	I0819 11:57:17.177266 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:17.179785 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.179931 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.180140 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.180179 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.180293 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:17.180315 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:17.180317 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:17.180491 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:17.180505 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.180673 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:17.180698 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:17.180826 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:17.180827 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:17.180954 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:17.284189 2094815 ssh_runner.go:195] Run: systemctl --version
	I0819 11:57:17.290157 2094815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:57:17.455135 2094815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:57:17.461485 2094815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:57:17.461548 2094815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:57:17.480132 2094815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:57:17.480154 2094815 start.go:495] detecting cgroup driver to use...
	I0819 11:57:17.480214 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:57:17.500432 2094815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:57:17.515964 2094815 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:57:17.516013 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:57:17.529514 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:57:17.544663 2094815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:57:17.680445 2094815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:57:17.837786 2094815 docker.go:233] disabling docker service ...
	I0819 11:57:17.837863 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:57:17.851884 2094815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:57:17.864472 2094815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:57:17.994507 2094815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:57:18.125073 2094815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:57:18.138335 2094815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:57:18.156461 2094815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:57:18.156531 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.167173 2094815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:57:18.167227 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.177819 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.187852 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.197614 2094815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:57:18.207875 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.217992 2094815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.234018 2094815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:57:18.244002 2094815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:57:18.253243 2094815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:57:18.253290 2094815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:57:18.266364 2094815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:57:18.275671 2094815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:18.394221 2094815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:57:18.531268 2094815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:57:18.531371 2094815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:57:18.536187 2094815 start.go:563] Will wait 60s for crictl version
	I0819 11:57:18.536260 2094815 ssh_runner.go:195] Run: which crictl
	I0819 11:57:18.539847 2094815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:57:18.577401 2094815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:57:18.577543 2094815 ssh_runner.go:195] Run: crio --version
	I0819 11:57:18.603938 2094815 ssh_runner.go:195] Run: crio --version
	I0819 11:57:18.631705 2094815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:57:18.632884 2094815 main.go:141] libmachine: (addons-442145) Calling .GetIP
	I0819 11:57:18.635593 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:18.635969 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:18.635994 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:18.636362 2094815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:57:18.640151 2094815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:57:18.652107 2094815 kubeadm.go:883] updating cluster {Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:57:18.652246 2094815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:57:18.652307 2094815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:18.683452 2094815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 11:57:18.683529 2094815 ssh_runner.go:195] Run: which lz4
	I0819 11:57:18.687583 2094815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:57:18.691636 2094815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:57:18.691665 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 11:57:19.943994 2094815 crio.go:462] duration metric: took 1.256445821s to copy over tarball
	I0819 11:57:19.944102 2094815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:57:22.023125 2094815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.078969951s)
	I0819 11:57:22.023161 2094815 crio.go:469] duration metric: took 2.079130592s to extract the tarball
	I0819 11:57:22.023169 2094815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 11:57:22.061657 2094815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:57:22.104809 2094815 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:57:22.104837 2094815 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:57:22.104846 2094815 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.0 crio true true} ...
	I0819 11:57:22.104968 2094815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-442145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:57:22.105039 2094815 ssh_runner.go:195] Run: crio config
	I0819 11:57:22.152875 2094815 cni.go:84] Creating CNI manager for ""
	I0819 11:57:22.152899 2094815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:57:22.152910 2094815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:57:22.152936 2094815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-442145 NodeName:addons-442145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:57:22.153097 2094815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-442145"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
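
For orientation, the YAML above is rendered from the `kubeadm options` struct logged a few lines earlier. A hypothetical, much-reduced Go text/template sketch of that options-to-config rendering (the field names and template body here are illustrative only; minikube's real built-in template produces the full InitConfiguration/ClusterConfiguration/KubeletConfiguration shown above):

```go
package main

import (
	"os"
	"text/template"
)

// A tiny, hypothetical fragment in the spirit of the rendered kubeadm config above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

// opts mirrors a handful of the fields from the "kubeadm options" log line.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above.
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.39.65",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "addons-442145",
		NodeIP:           "192.168.39.65",
	}); err != nil {
		panic(err)
	}
}
```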
	
	I0819 11:57:22.153177 2094815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:57:22.163740 2094815 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:57:22.163839 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:57:22.173014 2094815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 11:57:22.189055 2094815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:57:22.204442 2094815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0819 11:57:22.220010 2094815 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0819 11:57:22.223740 2094815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:57:22.235569 2094815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:22.357514 2094815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:57:22.375348 2094815 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145 for IP: 192.168.39.65
	I0819 11:57:22.375379 2094815 certs.go:194] generating shared ca certs ...
	I0819 11:57:22.375403 2094815 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:22.375605 2094815 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 11:57:22.971448 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt ...
	I0819 11:57:22.971487 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt: {Name:mka0297c7b1857162b4d15fa199e14a0b441de45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:22.971811 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key ...
	I0819 11:57:22.971839 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key: {Name:mka4e1d0997125aeae0f1f57e795b0c6e3edc1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:22.971968 2094815 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 11:57:23.193860 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt ...
	I0819 11:57:23.193894 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt: {Name:mk2f58fe458c077be1f44279e6df9b0b93ca2f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.194102 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key ...
	I0819 11:57:23.194123 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key: {Name:mkb9e5a393787f71234dac42cae557aec592d278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.194233 2094815 certs.go:256] generating profile certs ...
	I0819 11:57:23.194324 2094815 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.key
	I0819 11:57:23.194359 2094815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt with IP's: []
	I0819 11:57:23.476492 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt ...
	I0819 11:57:23.476530 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: {Name:mkb3e0f02ffff71610e3eeb94d54caada6d612bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.476742 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.key ...
	I0819 11:57:23.476759 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.key: {Name:mk98be8b936ae078011d638e96490d70fa4cd538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.476864 2094815 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f
	I0819 11:57:23.476886 2094815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65]
	I0819 11:57:23.657327 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f ...
	I0819 11:57:23.657371 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f: {Name:mk8e9b79a8c57b80cbe867fdf3f5be9e1482b1aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.657604 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f ...
	I0819 11:57:23.657627 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f: {Name:mk426718edcae2a1adb428729b94abd73c838576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:23.657748 2094815 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt.c064630f -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt
	I0819 11:57:23.657866 2094815 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key.c064630f -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key
	I0819 11:57:23.657950 2094815 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key
	I0819 11:57:23.657981 2094815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt with IP's: []
	I0819 11:57:24.005260 2094815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt ...
	I0819 11:57:24.005294 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt: {Name:mkdd6f0319931e35fc36f2fbe0654c73e6a3aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:24.005578 2094815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key ...
	I0819 11:57:24.005615 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key: {Name:mkece11db20ab97c6baea62eb66ccae006f7d980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:24.005908 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:57:24.005957 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 11:57:24.005995 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:57:24.006029 2094815 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 11:57:24.006723 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:57:24.049247 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:57:24.094161 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:57:24.119278 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:57:24.142385 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 11:57:24.165914 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:57:24.189542 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:57:24.213053 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 11:57:24.235959 2094815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:57:24.258911 2094815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:57:24.276073 2094815 ssh_runner.go:195] Run: openssl version
	I0819 11:57:24.281972 2094815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:57:24.292844 2094815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:24.297232 2094815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:24.297307 2094815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:57:24.302971 2094815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:57:24.313731 2094815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:57:24.317680 2094815 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:57:24.317732 2094815 kubeadm.go:392] StartCluster: {Name:addons-442145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-442145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:24.317824 2094815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:57:24.317908 2094815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:57:24.365294 2094815 cri.go:89] found id: ""
	I0819 11:57:24.365370 2094815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:57:24.375209 2094815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:57:24.384667 2094815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:57:24.394313 2094815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:57:24.394334 2094815 kubeadm.go:157] found existing configuration files:
	
	I0819 11:57:24.394389 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:57:24.403638 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:57:24.403712 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:57:24.413346 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:57:24.422769 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:57:24.422847 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:57:24.432137 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:57:24.441012 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:57:24.441079 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:57:24.450410 2094815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:57:24.459229 2094815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:57:24.459305 2094815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:57:24.468345 2094815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:57:24.515669 2094815 kubeadm.go:310] W0819 11:57:24.500205     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:24.516629 2094815 kubeadm.go:310] W0819 11:57:24.501439     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:57:24.615797 2094815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:57:34.378940 2094815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:57:34.379045 2094815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:57:34.379185 2094815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:57:34.379297 2094815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:57:34.379432 2094815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:57:34.379493 2094815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:57:34.381012 2094815 out.go:235]   - Generating certificates and keys ...
	I0819 11:57:34.381103 2094815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:57:34.381206 2094815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:57:34.381297 2094815 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:57:34.381345 2094815 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:57:34.381404 2094815 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:57:34.381450 2094815 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:57:34.381514 2094815 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:57:34.381619 2094815 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-442145 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0819 11:57:34.381696 2094815 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:57:34.381842 2094815 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-442145 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0819 11:57:34.381904 2094815 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:57:34.381962 2094815 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:57:34.382002 2094815 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:57:34.382056 2094815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:57:34.382106 2094815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:57:34.382179 2094815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:57:34.382249 2094815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:57:34.382321 2094815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:57:34.382383 2094815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:57:34.382465 2094815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:57:34.382537 2094815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:57:34.384537 2094815 out.go:235]   - Booting up control plane ...
	I0819 11:57:34.384626 2094815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:57:34.384742 2094815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:57:34.384840 2094815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:57:34.384961 2094815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:57:34.385049 2094815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:57:34.385110 2094815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:57:34.385280 2094815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:57:34.385375 2094815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:57:34.385431 2094815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.979821ms
	I0819 11:57:34.385499 2094815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:57:34.385554 2094815 kubeadm.go:310] [api-check] The API server is healthy after 5.002073141s
	I0819 11:57:34.385646 2094815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:57:34.385752 2094815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:57:34.385802 2094815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:57:34.385972 2094815 kubeadm.go:310] [mark-control-plane] Marking the node addons-442145 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:57:34.386046 2094815 kubeadm.go:310] [bootstrap-token] Using token: 9y2rkr.4iuj11aems6pns8i
	I0819 11:57:34.387475 2094815 out.go:235]   - Configuring RBAC rules ...
	I0819 11:57:34.387608 2094815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:57:34.387722 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:57:34.387891 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:57:34.388025 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:57:34.388182 2094815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:57:34.388321 2094815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:57:34.388462 2094815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:57:34.388499 2094815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:57:34.388540 2094815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:57:34.388546 2094815 kubeadm.go:310] 
	I0819 11:57:34.388591 2094815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:57:34.388598 2094815 kubeadm.go:310] 
	I0819 11:57:34.388677 2094815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:57:34.388685 2094815 kubeadm.go:310] 
	I0819 11:57:34.388706 2094815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:57:34.388796 2094815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:57:34.388868 2094815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:57:34.388891 2094815 kubeadm.go:310] 
	I0819 11:57:34.388973 2094815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:57:34.388985 2094815 kubeadm.go:310] 
	I0819 11:57:34.389046 2094815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:57:34.389057 2094815 kubeadm.go:310] 
	I0819 11:57:34.389135 2094815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:57:34.389281 2094815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:57:34.389383 2094815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:57:34.389389 2094815 kubeadm.go:310] 
	I0819 11:57:34.389509 2094815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:57:34.389608 2094815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:57:34.389621 2094815 kubeadm.go:310] 
	I0819 11:57:34.389739 2094815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9y2rkr.4iuj11aems6pns8i \
	I0819 11:57:34.389890 2094815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 11:57:34.389914 2094815 kubeadm.go:310] 	--control-plane 
	I0819 11:57:34.389920 2094815 kubeadm.go:310] 
	I0819 11:57:34.389995 2094815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:57:34.390001 2094815 kubeadm.go:310] 
	I0819 11:57:34.390067 2094815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9y2rkr.4iuj11aems6pns8i \
	I0819 11:57:34.390183 2094815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 11:57:34.390196 2094815 cni.go:84] Creating CNI manager for ""
	I0819 11:57:34.390213 2094815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:57:34.391725 2094815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:57:34.392874 2094815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:57:34.404390 2094815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
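The two ssh_runner calls above create /etc/cni/net.d and copy a 496-byte bridge CNI config into place as 1-k8s.conflist; the log does not show the file's contents. As a rough, hypothetical sketch of the bridge+portmap conflist shape such a file usually has (every field value below is an assumption, not minikube's actual config), expressed as a small Go program:

package main

import "os"

// bridgeConflist is an illustrative bridge CNI plugin chain; minikube's
// real 1-k8s.conflist may differ in every field.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors the `sudo mkdir -p /etc/cni/net.d` and scp steps recorded above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}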
	I0819 11:57:34.423587 2094815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:57:34.423699 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:34.423728 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-442145 minikube.k8s.io/updated_at=2024_08_19T11_57_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=addons-442145 minikube.k8s.io/primary=true
	I0819 11:57:34.449659 2094815 ops.go:34] apiserver oom_adj: -16
	I0819 11:57:34.529363 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:35.030024 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:35.529541 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:36.030427 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:36.529478 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:37.029725 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:37.529853 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:38.030014 2094815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:57:38.106439 2094815 kubeadm.go:1113] duration metric: took 3.682830959s to wait for elevateKubeSystemPrivileges
	I0819 11:57:38.106479 2094815 kubeadm.go:394] duration metric: took 13.788753152s to StartCluster
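The repeated `kubectl get sa default` invocations above (roughly every 500ms) are how the bootstrap waits for the default ServiceAccount to exist before granting kube-system elevated RBAC; the duration lines record that this took about 3.7s of the ~13.8s StartCluster. A minimal, hypothetical sketch of that polling pattern (function name, timeout, and interval are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until it
// succeeds or the timeout expires, mimicking the polling seen in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default ServiceAccount exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
	}
	return fmt.Errorf("timed out waiting for default ServiceAccount after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		3*time.Minute,
	)
	fmt.Println("wait result:", err)
}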
	I0819 11:57:38.106505 2094815 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:38.106653 2094815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 11:57:38.107163 2094815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:38.107390 2094815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:57:38.107416 2094815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:57:38.107502 2094815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 11:57:38.107604 2094815 config.go:182] Loaded profile config "addons-442145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:38.107613 2094815 addons.go:69] Setting yakd=true in profile "addons-442145"
	I0819 11:57:38.107645 2094815 addons.go:234] Setting addon yakd=true in "addons-442145"
	I0819 11:57:38.107649 2094815 addons.go:69] Setting inspektor-gadget=true in profile "addons-442145"
	I0819 11:57:38.107668 2094815 addons.go:69] Setting registry=true in profile "addons-442145"
	I0819 11:57:38.107691 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107701 2094815 addons.go:69] Setting gcp-auth=true in profile "addons-442145"
	I0819 11:57:38.107725 2094815 addons.go:234] Setting addon inspektor-gadget=true in "addons-442145"
	I0819 11:57:38.107724 2094815 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-442145"
	I0819 11:57:38.107745 2094815 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-442145"
	I0819 11:57:38.107762 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107764 2094815 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-442145"
	I0819 11:57:38.107743 2094815 mustload.go:65] Loading cluster: addons-442145
	I0819 11:57:38.107803 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107805 2094815 addons.go:69] Setting metrics-server=true in profile "addons-442145"
	I0819 11:57:38.107829 2094815 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-442145"
	I0819 11:57:38.107841 2094815 addons.go:234] Setting addon metrics-server=true in "addons-442145"
	I0819 11:57:38.107864 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107971 2094815 addons.go:69] Setting cloud-spanner=true in profile "addons-442145"
	I0819 11:57:38.107992 2094815 config.go:182] Loaded profile config "addons-442145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:57:38.108002 2094815 addons.go:234] Setting addon cloud-spanner=true in "addons-442145"
	I0819 11:57:38.108023 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108202 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108210 2094815 addons.go:234] Setting addon registry=true in "addons-442145"
	I0819 11:57:38.108225 2094815 addons.go:69] Setting volcano=true in profile "addons-442145"
	I0819 11:57:38.108233 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108234 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108246 2094815 addons.go:234] Setting addon volcano=true in "addons-442145"
	I0819 11:57:38.108248 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107866 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108267 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108273 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108342 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108370 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108389 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108407 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108577 2094815 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-442145"
	I0819 11:57:38.108621 2094815 addons.go:69] Setting volumesnapshots=true in profile "addons-442145"
	I0819 11:57:38.108651 2094815 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-442145"
	I0819 11:57:38.108655 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108674 2094815 addons.go:234] Setting addon volumesnapshots=true in "addons-442145"
	I0819 11:57:38.108692 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108708 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108593 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108766 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108608 2094815 addons.go:69] Setting default-storageclass=true in profile "addons-442145"
	I0819 11:57:38.108980 2094815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-442145"
	I0819 11:57:38.108630 2094815 addons.go:69] Setting ingress=true in profile "addons-442145"
	I0819 11:57:38.109037 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109045 2094815 addons.go:234] Setting addon ingress=true in "addons-442145"
	I0819 11:57:38.109057 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109088 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.109143 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.108203 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109167 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109196 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109376 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109409 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.109475 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109495 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108614 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.109696 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108630 2094815 addons.go:69] Setting ingress-dns=true in profile "addons-442145"
	I0819 11:57:38.110281 2094815 addons.go:234] Setting addon ingress-dns=true in "addons-442145"
	I0819 11:57:38.110337 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.107654 2094815 addons.go:69] Setting storage-provisioner=true in profile "addons-442145"
	I0819 11:57:38.110458 2094815 addons.go:234] Setting addon storage-provisioner=true in "addons-442145"
	I0819 11:57:38.110495 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.110709 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.110742 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.110860 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.110902 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.108640 2094815 addons.go:69] Setting helm-tiller=true in profile "addons-442145"
	I0819 11:57:38.111053 2094815 addons.go:234] Setting addon helm-tiller=true in "addons-442145"
	I0819 11:57:38.111089 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.108614 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.116267 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.116422 2094815 out.go:177] * Verifying Kubernetes components...
	I0819 11:57:38.117928 2094815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:57:38.129190 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0819 11:57:38.129398 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0819 11:57:38.129497 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0819 11:57:38.129918 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.130036 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.130091 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.130442 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.130462 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.130640 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.130660 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.130782 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.130796 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.130908 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.131000 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.131150 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.131180 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.131580 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.131621 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.131693 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.131714 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.133055 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.133425 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.133476 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.138993 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0819 11:57:38.144130 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0819 11:57:38.145515 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.145519 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.145605 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42003
	I0819 11:57:38.146381 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.146401 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.146774 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.146800 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.146838 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.147040 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.147400 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.147461 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.147599 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.147651 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I0819 11:57:38.147807 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.148275 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.148192 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.148686 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.148890 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.148930 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.148963 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.149424 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0819 11:57:38.149878 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.150113 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.150180 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.150331 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.150477 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.150362 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I0819 11:57:38.150824 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.151253 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.151266 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.151317 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.151958 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.152003 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.156485 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.160645 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.160723 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 11:57:38.160731 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.161318 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.161337 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.161562 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.161745 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.162375 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.162424 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.162720 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.162739 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.163107 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.163260 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.164879 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.167307 2094815 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-442145"
	I0819 11:57:38.167358 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.167741 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.167789 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.167892 2094815 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 11:57:38.169077 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 11:57:38.169102 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 11:57:38.169127 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.172505 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.172977 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.173011 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.173311 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.173527 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.173685 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.173818 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.194814 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0819 11:57:38.195440 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.196008 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.196028 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.196397 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.196604 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.198597 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0819 11:57:38.199936 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.200959 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0819 11:57:38.201524 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.201970 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 11:57:38.202109 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.202126 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.202197 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0819 11:57:38.202705 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.202784 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.203031 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.203761 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.203779 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.205488 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 11:57:38.206513 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I0819 11:57:38.206555 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0819 11:57:38.206573 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.206520 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0819 11:57:38.206669 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I0819 11:57:38.206739 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.207163 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.207281 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.207660 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.207706 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.207790 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.207807 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.207977 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 11:57:38.208283 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.208282 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.208342 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.208638 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.208662 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.208736 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.209194 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.209231 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.209417 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.209612 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39159
	I0819 11:57:38.209925 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.209954 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.210026 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.210089 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.210195 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.210209 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.210377 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 11:57:38.210626 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0819 11:57:38.210756 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.210799 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.211088 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.211165 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.211326 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.212744 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 11:57:38.213133 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.213148 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.213224 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.213281 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I0819 11:57:38.213435 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.213496 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0819 11:57:38.213706 2094815 addons.go:234] Setting addon default-storageclass=true in "addons-442145"
	I0819 11:57:38.213747 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:38.213995 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.214031 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.214140 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.214174 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.214306 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.214348 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0819 11:57:38.214388 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0819 11:57:38.214786 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.214810 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.214862 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 11:57:38.214915 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.214884 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.214918 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.215028 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.215159 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.215669 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.215693 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.215830 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.215861 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.216507 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.216525 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.216571 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.216875 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.216968 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 11:57:38.217277 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.217312 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.217316 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.217659 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.218655 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.218680 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.218748 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.219332 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 11:57:38.219392 2094815 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 11:57:38.220047 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.220226 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.220353 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 11:57:38.220373 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 11:57:38.220395 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.220522 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.220593 2094815 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 11:57:38.220613 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 11:57:38.220649 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.220892 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.222292 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.222310 2094815 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 11:57:38.222331 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.222844 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.223289 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.224531 2094815 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 11:57:38.224942 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.224986 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.225544 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.225583 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.225700 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 11:57:38.225721 2094815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 11:57:38.225747 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.225784 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.225786 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.225864 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.225966 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.225996 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.226137 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.226323 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.226479 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.226927 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.227652 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.228340 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 11:57:38.228348 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.228360 2094815 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 11:57:38.228389 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.229264 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.229408 2094815 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 11:57:38.229778 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.230228 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.230255 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.230408 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.230666 2094815 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:57:38.230685 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 11:57:38.230703 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.231274 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.231513 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.231641 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.232224 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.232654 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0819 11:57:38.232839 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.232906 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.232926 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.232958 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.233168 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.233413 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.233820 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.234504 2094815 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 11:57:38.235391 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.235595 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 11:57:38.235617 2094815 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 11:57:38.235636 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.235987 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.236016 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.236201 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.236433 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.236559 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.236693 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.237121 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.237779 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.237801 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.238187 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.238705 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.238748 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.238950 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.239137 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.239171 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.239436 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.239641 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.239856 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.240013 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.241631 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I0819 11:57:38.242009 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.242508 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.242529 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.242854 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.243001 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.244655 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.246488 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 11:57:38.247715 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:57:38.248743 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:57:38.249454 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39957
	I0819 11:57:38.249858 2094815 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:57:38.249882 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 11:57:38.249905 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.250401 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.250946 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.250971 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.251382 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.251957 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:38.252006 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:38.253276 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.253737 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.253766 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.253952 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.254114 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.254243 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.254382 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.256985 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0819 11:57:38.257536 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.258159 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.258187 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.259518 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.259742 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.260928 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0819 11:57:38.261547 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.262100 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.262119 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.262544 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.262769 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.263302 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0819 11:57:38.263460 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0819 11:57:38.264037 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.264121 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.264908 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.264927 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.265036 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.265340 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.265498 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
	I0819 11:57:38.265652 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:38.265662 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:38.265865 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:38.265899 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:38.265907 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:38.265916 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:38.265924 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:38.265948 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.266737 2094815 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 11:57:38.267574 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:38.267576 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:38.267586 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:38.267550 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0819 11:57:38.267643 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.267657 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	W0819 11:57:38.267698 2094815 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 11:57:38.268096 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.268099 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.268159 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.268291 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.268415 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.268727 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.268745 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.268807 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 11:57:38.269157 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.269169 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.269353 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.269558 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.269774 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.270016 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.270079 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.270099 2094815 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 11:57:38.270126 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 11:57:38.270149 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.270693 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.271527 2094815 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 11:57:38.272118 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 11:57:38.272730 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.272957 2094815 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 11:57:38.272976 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 11:57:38.272994 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.273810 2094815 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:57:38.273828 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 11:57:38.273845 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.274109 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.274470 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:57:38.275176 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.275516 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.275551 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.275648 2094815 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 11:57:38.275796 2094815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:57:38.275811 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:57:38.275828 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.275908 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.276132 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.276370 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.276567 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.278057 2094815 out.go:177]   - Using image docker.io/busybox:stable
	I0819 11:57:38.278060 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.278599 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.278636 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.278812 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.279011 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.279034 2094815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:57:38.279047 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 11:57:38.279064 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.279207 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.279274 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.279353 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.279654 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.279979 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.280013 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.280512 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.280545 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.280556 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.280691 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.280861 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.280866 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.281156 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.281204 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.281376 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.281527 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.282567 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.282882 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.282912 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.283051 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.283202 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.283374 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.283499 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:38.289416 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I0819 11:57:38.289771 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:38.290427 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:38.290445 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:38.290842 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:38.291047 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:38.292615 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:38.292906 2094815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:57:38.292921 2094815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:57:38.292939 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:38.296091 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.296570 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:38.296600 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:38.296813 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:38.296990 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:38.297132 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:38.297261 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	W0819 11:57:38.298172 2094815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51710->192.168.39.65:22: read: connection reset by peer
	I0819 11:57:38.298205 2094815 retry.go:31] will retry after 335.547232ms: ssh: handshake failed: read tcp 192.168.39.1:51710->192.168.39.65:22: read: connection reset by peer
	I0819 11:57:38.564068 2094815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:57:38.564283 2094815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 11:57:38.579191 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 11:57:38.579231 2094815 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 11:57:38.607420 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 11:57:38.674593 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 11:57:38.674622 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 11:57:38.694992 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:57:38.708710 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 11:57:38.708746 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 11:57:38.732723 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:57:38.733816 2094815 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 11:57:38.733835 2094815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 11:57:38.738073 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:57:38.740231 2094815 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 11:57:38.740249 2094815 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 11:57:38.749544 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 11:57:38.749564 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 11:57:38.884025 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 11:57:38.884053 2094815 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 11:57:38.893025 2094815 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 11:57:38.893048 2094815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 11:57:38.896853 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:57:38.899665 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 11:57:38.899689 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 11:57:38.910905 2094815 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 11:57:38.910925 2094815 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 11:57:38.932460 2094815 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:57:38.932481 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 11:57:38.933207 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:57:38.940727 2094815 node_ready.go:35] waiting up to 6m0s for node "addons-442145" to be "Ready" ...
	I0819 11:57:38.946122 2094815 node_ready.go:49] node "addons-442145" has status "Ready":"True"
	I0819 11:57:38.946156 2094815 node_ready.go:38] duration metric: took 5.390948ms for node "addons-442145" to be "Ready" ...
	I0819 11:57:38.946170 2094815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:57:38.957712 2094815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace to be "Ready" ...
	I0819 11:57:39.019988 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 11:57:39.020030 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 11:57:39.065454 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 11:57:39.065492 2094815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 11:57:39.127813 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 11:57:39.127849 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 11:57:39.145074 2094815 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 11:57:39.145104 2094815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 11:57:39.159222 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 11:57:39.159251 2094815 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 11:57:39.166327 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:57:39.196932 2094815 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:57:39.196964 2094815 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 11:57:39.215941 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 11:57:39.215965 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 11:57:39.249770 2094815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:57:39.249809 2094815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 11:57:39.273120 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 11:57:39.273146 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 11:57:39.316286 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:57:39.337741 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 11:57:39.367919 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 11:57:39.367957 2094815 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 11:57:39.424287 2094815 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:57:39.424320 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 11:57:39.431274 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:57:39.448963 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 11:57:39.448997 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 11:57:39.463034 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 11:57:39.463063 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 11:57:39.633323 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 11:57:39.633351 2094815 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 11:57:39.673127 2094815 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:57:39.673163 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 11:57:39.720672 2094815 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 11:57:39.720701 2094815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 11:57:39.731211 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:57:39.810511 2094815 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:57:39.810537 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 11:57:39.940902 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:57:39.973636 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 11:57:39.973661 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 11:57:40.077191 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:57:40.271200 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 11:57:40.271232 2094815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 11:57:40.557600 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 11:57:40.557635 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 11:57:40.702057 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 11:57:40.702090 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 11:57:40.966368 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:40.984458 2094815 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.420132515s)
	I0819 11:57:40.984497 2094815 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 11:57:41.080803 2094815 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:57:41.080830 2094815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 11:57:41.326414 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.71894541s)
	I0819 11:57:41.326467 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:41.326482 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:41.326798 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:41.326848 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:41.326867 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:41.326885 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:41.326896 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:41.327243 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:41.327304 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:41.327326 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:41.393897 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:57:41.536655 2094815 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-442145" context rescaled to 1 replicas
	I0819 11:57:42.991192 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:45.221824 2094815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 11:57:45.221882 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:45.225476 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.225970 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:45.226006 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.226132 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:45.226411 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:45.226636 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:45.226809 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:45.465460 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:45.780289 2094815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 11:57:45.867391 2094815 addons.go:234] Setting addon gcp-auth=true in "addons-442145"
	I0819 11:57:45.867479 2094815 host.go:66] Checking if "addons-442145" exists ...
	I0819 11:57:45.867987 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:45.868049 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:45.884933 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0819 11:57:45.885378 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:45.885962 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:45.885986 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:45.886299 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:45.886818 2094815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:57:45.886865 2094815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:57:45.902843 2094815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0819 11:57:45.903305 2094815 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:57:45.903895 2094815 main.go:141] libmachine: Using API Version  1
	I0819 11:57:45.903928 2094815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:57:45.904330 2094815 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:57:45.904651 2094815 main.go:141] libmachine: (addons-442145) Calling .GetState
	I0819 11:57:45.906291 2094815 main.go:141] libmachine: (addons-442145) Calling .DriverName
	I0819 11:57:45.906575 2094815 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 11:57:45.906606 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHHostname
	I0819 11:57:45.909590 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.910082 2094815 main.go:141] libmachine: (addons-442145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f2:1a", ip: ""} in network mk-addons-442145: {Iface:virbr1 ExpiryTime:2024-08-19 12:57:07 +0000 UTC Type:0 Mac:52:54:00:9d:f2:1a Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-442145 Clientid:01:52:54:00:9d:f2:1a}
	I0819 11:57:45.910112 2094815 main.go:141] libmachine: (addons-442145) DBG | domain addons-442145 has defined IP address 192.168.39.65 and MAC address 52:54:00:9d:f2:1a in network mk-addons-442145
	I0819 11:57:45.910292 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHPort
	I0819 11:57:45.910477 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHKeyPath
	I0819 11:57:45.910694 2094815 main.go:141] libmachine: (addons-442145) Calling .GetSSHUsername
	I0819 11:57:45.910830 2094815 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/addons-442145/id_rsa Username:docker}
	I0819 11:57:47.598656 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:47.623655 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.928621766s)
	I0819 11:57:47.623701 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.890939743s)
	I0819 11:57:47.623719 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623734 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623752 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623774 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623781 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.885681935s)
	I0819 11:57:47.623821 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623823 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.726942882s)
	I0819 11:57:47.623835 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623844 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623853 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623896 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.690664231s)
	I0819 11:57:47.623923 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623931 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.457576354s)
	I0819 11:57:47.623951 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623964 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623962 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.307649831s)
	I0819 11:57:47.623981 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.623991 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.623935 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624065 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.286294779s)
	I0819 11:57:47.624087 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624097 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624271 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624292 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624302 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624310 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624437 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624446 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624453 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624460 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624468 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624471 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624477 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624480 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624596 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.624639 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.624675 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.624697 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624713 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624720 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.193414484s)
	I0819 11:57:47.624752 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624762 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.624857 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.89361848s)
	I0819 11:57:47.624875 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.624883 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.625003 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.684071119s)
	W0819 11:57:47.625029 2094815 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:57:47.625057 2094815 retry.go:31] will retry after 194.789439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:57:47.625076 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.625125 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.625133 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.625140 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.625141 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.5479198s)
	I0819 11:57:47.625148 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.625156 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.625166 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.625221 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.625228 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.625236 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.625243 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.626363 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.626395 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.626402 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627076 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627100 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627097 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627118 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627126 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627135 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.627142 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.627159 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627185 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627118 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627199 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627201 2094815 addons.go:475] Verifying addon registry=true in "addons-442145"
	I0819 11:57:47.627221 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627228 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627329 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627352 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627359 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627367 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.627374 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.627442 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627464 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627471 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627689 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627700 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.627725 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627735 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.627741 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627749 2094815 addons.go:475] Verifying addon metrics-server=true in "addons-442145"
	I0819 11:57:47.627125 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.627934 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.627952 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.628055 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.628079 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.628102 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.628109 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.628766 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.628785 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.628794 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.628802 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.629272 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.629307 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.629314 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.630239 2094815 out.go:177] * Verifying registry addon...
	I0819 11:57:47.630951 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.630969 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.631286 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.631337 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.631348 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.631357 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.631365 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.631372 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.631381 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.631388 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.632684 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:47.632718 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.632725 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.633174 2094815 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-442145 service yakd-dashboard -n yakd-dashboard
	
	I0819 11:57:47.627312 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.633296 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.633308 2094815 addons.go:475] Verifying addon ingress=true in "addons-442145"
	I0819 11:57:47.633536 2094815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 11:57:47.634881 2094815 out.go:177] * Verifying ingress addon...
	I0819 11:57:47.636648 2094815 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 11:57:47.649138 2094815 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:57:47.649165 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:47.653339 2094815 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 11:57:47.653360 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:47.658324 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.658350 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.658772 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.658791 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 11:57:47.658885 2094815 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 11:57:47.665021 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:47.665044 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:47.665360 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:47.665378 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:47.820325 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:57:48.166204 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:48.166652 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:48.344018 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.950048961s)
	I0819 11:57:48.344081 2094815 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.437459649s)
	I0819 11:57:48.344084 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:48.344208 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:48.344567 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:48.344574 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:48.344585 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:48.344595 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:48.344604 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:48.344838 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:48.344856 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:48.344860 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:48.344884 2094815 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-442145"
	I0819 11:57:48.345577 2094815 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 11:57:48.346461 2094815 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 11:57:48.347603 2094815 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:57:48.348382 2094815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 11:57:48.348598 2094815 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 11:57:48.348620 2094815 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 11:57:48.378480 2094815 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:57:48.378517 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:48.580069 2094815 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 11:57:48.580102 2094815 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 11:57:48.637966 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:48.642188 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:48.655162 2094815 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:57:48.655188 2094815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 11:57:48.774774 2094815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:57:48.853534 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:49.140047 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:49.240327 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:49.353142 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:49.637006 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:49.639846 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:49.853086 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:49.964136 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:50.145614 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:50.146017 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:50.329243 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.508853221s)
	I0819 11:57:50.329321 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.329340 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.329756 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.329786 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.329797 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.329806 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.329766 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.330093 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.330113 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.330134 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.364522 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:50.584208 2094815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.809381965s)
	I0819 11:57:50.584279 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.584297 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.584674 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.584698 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.584708 2094815 main.go:141] libmachine: Making call to close driver server
	I0819 11:57:50.584718 2094815 main.go:141] libmachine: (addons-442145) Calling .Close
	I0819 11:57:50.584723 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.584980 2094815 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:57:50.585015 2094815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:57:50.585048 2094815 main.go:141] libmachine: (addons-442145) DBG | Closing plugin on server side
	I0819 11:57:50.586908 2094815 addons.go:475] Verifying addon gcp-auth=true in "addons-442145"
	I0819 11:57:50.588489 2094815 out.go:177] * Verifying gcp-auth addon...
	I0819 11:57:50.590190 2094815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 11:57:50.609746 2094815 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 11:57:50.609769 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:50.640527 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:50.643751 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:50.858111 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:51.093986 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:51.138443 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:51.143223 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:51.352789 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:51.594043 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:51.637071 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:51.640320 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:51.853455 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:52.093947 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:52.137867 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:52.140323 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:52.353717 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:52.464245 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:52.594299 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:52.695908 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:52.696258 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:52.853458 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:53.095533 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:53.136865 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:53.140325 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:53.353233 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:53.593604 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:53.638307 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:53.642048 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:53.853475 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:54.093993 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:54.136907 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:54.140068 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:54.491105 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:54.495246 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:54.594121 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:54.636797 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:54.639578 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:54.857063 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:55.094735 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:55.141055 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:55.143270 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:55.352924 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:55.594118 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:55.637064 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:55.641668 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:55.854374 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:55.963336 2094815 pod_ready.go:98] pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.65 HostIPs:[{IP:192.168.39.65}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 11:57:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 11:57:43 +0000 UTC,FinishedAt:2024-08-19 11:57:53 +0000 UTC,ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927 Started:0xc0018b53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020ef1c0} {Name:kube-api-access-jq8lx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020ef1d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 11:57:55.963371 2094815 pod_ready.go:82] duration metric: took 17.005625305s for pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace to be "Ready" ...
	E0819 11:57:55.963386 2094815 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-cdsdp" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 11:57:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.65 HostIPs:[{IP:192.168.39.65}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 11:57:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 11:57:43 +0000 UTC,FinishedAt:2024-08-19 11:57:53 +0000 UTC,ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://a35c8d14866a8f8298c448a5ee1a23aa7269b037856b07b0680d73586a6cc927 Started:0xc0018b53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020ef1c0} {Name:kube-api-access-jq8lx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020ef1d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 11:57:55.963395 2094815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace to be "Ready" ...
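
The two entries above show how pod_ready treats per-pod status: a pod that has already reached a terminal phase (here coredns-6f6b679f8f-cdsdp in Succeeded, after a replica was scaled away) is skipped with an error, and an active pod only counts once its Ready condition is True. A hedged sketch of that check, with an assumed function name, follows.

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// IsPodReady reflects the pod_ready behaviour in the log: terminal pods
// (Succeeded/Failed) are surfaced as an error so the caller can skip them,
// and an active pod is ready only when its Ready condition is True.
func IsPodReady(p *corev1.Pod) (bool, error) {
	if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
		return false, fmt.Errorf("pod %q has terminal phase %q, skipping", p.Name, p.Status.Phase)
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
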
	I0819 11:57:56.094411 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:56.138077 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:56.140319 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:56.353796 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:56.597824 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:56.637894 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:56.640254 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:56.852869 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:57.094099 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:57.136601 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:57.140152 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:57.353148 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:57.594173 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:57.637534 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:57.640816 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:57.853849 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:57.971045 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:57:58.094801 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:58.139940 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:58.141737 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:58.354906 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:58.775818 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:58.775940 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:58.776498 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:58.853083 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:59.093939 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:59.137209 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:59.140022 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:59.352743 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:59.593714 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:57:59.637495 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:57:59.639551 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:57:59.853767 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:57:59.971309 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:00.094542 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:00.137874 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:00.140603 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:00.353302 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:00.594446 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:00.637970 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:00.640276 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:00.853003 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:01.094294 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:01.137373 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:01.140748 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:01.353486 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:01.594402 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:01.637831 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:01.640110 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:01.855495 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:02.093852 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:02.137989 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:02.140352 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:02.352868 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:02.470936 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:02.594358 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:03.025727 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:03.026100 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:03.027289 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:03.094118 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:03.138571 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:03.141373 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:03.355523 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:03.594421 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:03.637439 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:03.640091 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:03.861158 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:04.094272 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:04.137399 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:04.139673 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:04.353341 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:04.594353 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:04.637299 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:04.641121 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:04.853728 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:04.971380 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:05.095448 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:05.139595 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:05.141445 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:05.353113 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:05.593290 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:05.637181 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:05.640330 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:05.853264 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:06.095553 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:06.137146 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:06.140379 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:06.352998 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:06.878222 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:06.878271 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:06.879690 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:06.881838 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:07.094818 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:07.138110 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:07.140770 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:07.353454 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:07.470785 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:07.594318 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:07.637406 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:07.639617 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:07.853013 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:08.094014 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:08.137894 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:08.144517 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:08.353977 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:08.593933 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:08.638220 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:08.640851 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:08.854277 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:09.099197 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:09.136953 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:09.140127 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:09.354238 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:09.594242 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:09.636998 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:09.640153 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:09.853396 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:09.970028 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:10.094591 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:10.137566 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.139803 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:10.353216 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:10.593776 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:10.638093 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:10.641061 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:10.854121 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:11.541221 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:11.541368 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.542525 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.543386 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:11.593710 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:11.637591 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:11.639939 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:11.852964 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:11.971656 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:12.093938 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:12.137117 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.140775 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.352699 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:12.593726 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:12.637535 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:12.639864 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:12.853713 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.094088 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:13.137378 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.139555 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.352916 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.594273 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:13.637023 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:13.640823 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:13.853692 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:13.974025 2094815 pod_ready.go:103] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"False"
	I0819 11:58:14.094605 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:14.138158 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.140899 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.353923 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:14.594546 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:14.637808 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:14.640733 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:14.853690 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.096280 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:15.137918 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.140696 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.353035 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.594558 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:15.637694 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:15.640463 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:15.853959 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:15.969911 2094815 pod_ready.go:93] pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.969942 2094815 pod_ready.go:82] duration metric: took 20.006535331s for pod "coredns-6f6b679f8f-tn2xl" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.969954 2094815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.978120 2094815 pod_ready.go:93] pod "etcd-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.978147 2094815 pod_ready.go:82] duration metric: took 8.186119ms for pod "etcd-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.978160 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.982466 2094815 pod_ready.go:93] pod "kube-apiserver-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.982496 2094815 pod_ready.go:82] duration metric: took 4.327594ms for pod "kube-apiserver-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.982510 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.986599 2094815 pod_ready.go:93] pod "kube-controller-manager-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.986618 2094815 pod_ready.go:82] duration metric: took 4.101224ms for pod "kube-controller-manager-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.986627 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rpzwc" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.991490 2094815 pod_ready.go:93] pod "kube-proxy-rpzwc" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:15.991512 2094815 pod_ready.go:82] duration metric: took 4.879831ms for pod "kube-proxy-rpzwc" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:15.991520 2094815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:16.093912 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.136646 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.140089 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.354308 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.368601 2094815 pod_ready.go:93] pod "kube-scheduler-addons-442145" in "kube-system" namespace has status "Ready":"True"
	I0819 11:58:16.368629 2094815 pod_ready.go:82] duration metric: took 377.100554ms for pod "kube-scheduler-addons-442145" in "kube-system" namespace to be "Ready" ...
	I0819 11:58:16.368639 2094815 pod_ready.go:39] duration metric: took 37.422453666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:58:16.368662 2094815 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:58:16.368747 2094815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:58:16.460514 2094815 api_server.go:72] duration metric: took 38.353055398s to wait for apiserver process to appear ...
	I0819 11:58:16.460552 2094815 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:58:16.460577 2094815 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0819 11:58:16.466907 2094815 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0819 11:58:16.467997 2094815 api_server.go:141] control plane version: v1.31.0
	I0819 11:58:16.468026 2094815 api_server.go:131] duration metric: took 7.465803ms to wait for apiserver health ...
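
The healthz wait above is an HTTPS GET against https://192.168.39.65:8443/healthz, treated as healthy on a 200 response. The sketch below reproduces only that probe; it skips certificate verification purely to stay short, whereas the real client trusts the cluster CA, and the function name is an assumption.

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// CheckAPIServerHealthz issues the same kind of GET shown in the log and
// treats HTTP 200 as healthy. InsecureSkipVerify is a simplification here.
func CheckAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}
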
	I0819 11:58:16.468037 2094815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:58:16.573081 2094815 system_pods.go:59] 18 kube-system pods found
	I0819 11:58:16.573118 2094815 system_pods.go:61] "coredns-6f6b679f8f-tn2xl" [0ffe02ab-9e5f-4a7a-80c5-e214a48d181e] Running
	I0819 11:58:16.573129 2094815 system_pods.go:61] "csi-hostpath-attacher-0" [4122db6b-7f03-45b6-89af-8209c1a35139] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 11:58:16.573137 2094815 system_pods.go:61] "csi-hostpath-resizer-0" [630e5c60-6c07-4f54-b32b-be622b85793c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 11:58:16.573149 2094815 system_pods.go:61] "csi-hostpathplugin-gt5g4" [0f9a92dd-39ae-422b-8da7-0150875f812f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 11:58:16.573155 2094815 system_pods.go:61] "etcd-addons-442145" [79fa472f-51e6-454e-b9de-841cf410f533] Running
	I0819 11:58:16.573162 2094815 system_pods.go:61] "kube-apiserver-addons-442145" [898b56ec-651a-47e3-bd20-752d4e558c84] Running
	I0819 11:58:16.573169 2094815 system_pods.go:61] "kube-controller-manager-addons-442145" [163525c5-c7fd-42db-a25d-ab69549efbcb] Running
	I0819 11:58:16.573179 2094815 system_pods.go:61] "kube-ingress-dns-minikube" [fc2cbf6d-d587-477e-ba35-4d0322e45461] Running
	I0819 11:58:16.573187 2094815 system_pods.go:61] "kube-proxy-rpzwc" [f58e40e6-11d0-415a-a1d4-96237cba48e7] Running
	I0819 11:58:16.573196 2094815 system_pods.go:61] "kube-scheduler-addons-442145" [3cc958d7-4f5d-4e0e-b28a-d7e289cf35cd] Running
	I0819 11:58:16.573203 2094815 system_pods.go:61] "metrics-server-8988944d9-6pvxf" [19869c82-91f3-46fc-b815-fa4ff03b5433] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 11:58:16.573212 2094815 system_pods.go:61] "nvidia-device-plugin-daemonset-mtvp4" [94c95122-aa0e-4340-8546-8a6b1539fa00] Running
	I0819 11:58:16.573222 2094815 system_pods.go:61] "registry-6fb4cdfc84-dqqsf" [2c0f4d4e-c475-4bda-a9c5-8958f5c9f2f7] Running
	I0819 11:58:16.573230 2094815 system_pods.go:61] "registry-proxy-kfv85" [56d07c49-4b74-4b80-94c5-73b9277814b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 11:58:16.573243 2094815 system_pods.go:61] "snapshot-controller-56fcc65765-hlm68" [1b328062-23c0-468b-a880-b428645ecba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.573254 2094815 system_pods.go:61] "snapshot-controller-56fcc65765-jmhhj" [686b9a12-4c81-4594-992c-42a552cac282] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.573261 2094815 system_pods.go:61] "storage-provisioner" [62a5e8a3-5947-4048-92af-651cde360d04] Running
	I0819 11:58:16.573276 2094815 system_pods.go:61] "tiller-deploy-b48cc5f79-97zl7" [246084b2-d891-47a5-b87d-17cbe04e45e7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 11:58:16.573287 2094815 system_pods.go:74] duration metric: took 105.242799ms to wait for pod list to return data ...
	I0819 11:58:16.573301 2094815 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:58:16.593396 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:16.637188 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:16.639999 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:16.767365 2094815 default_sa.go:45] found service account: "default"
	I0819 11:58:16.767395 2094815 default_sa.go:55] duration metric: took 194.084902ms for default service account to be created ...
	I0819 11:58:16.767405 2094815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:58:16.853120 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:16.973876 2094815 system_pods.go:86] 18 kube-system pods found
	I0819 11:58:16.973911 2094815 system_pods.go:89] "coredns-6f6b679f8f-tn2xl" [0ffe02ab-9e5f-4a7a-80c5-e214a48d181e] Running
	I0819 11:58:16.973920 2094815 system_pods.go:89] "csi-hostpath-attacher-0" [4122db6b-7f03-45b6-89af-8209c1a35139] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 11:58:16.973926 2094815 system_pods.go:89] "csi-hostpath-resizer-0" [630e5c60-6c07-4f54-b32b-be622b85793c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 11:58:16.973934 2094815 system_pods.go:89] "csi-hostpathplugin-gt5g4" [0f9a92dd-39ae-422b-8da7-0150875f812f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 11:58:16.973940 2094815 system_pods.go:89] "etcd-addons-442145" [79fa472f-51e6-454e-b9de-841cf410f533] Running
	I0819 11:58:16.973947 2094815 system_pods.go:89] "kube-apiserver-addons-442145" [898b56ec-651a-47e3-bd20-752d4e558c84] Running
	I0819 11:58:16.973952 2094815 system_pods.go:89] "kube-controller-manager-addons-442145" [163525c5-c7fd-42db-a25d-ab69549efbcb] Running
	I0819 11:58:16.973964 2094815 system_pods.go:89] "kube-ingress-dns-minikube" [fc2cbf6d-d587-477e-ba35-4d0322e45461] Running
	I0819 11:58:16.973969 2094815 system_pods.go:89] "kube-proxy-rpzwc" [f58e40e6-11d0-415a-a1d4-96237cba48e7] Running
	I0819 11:58:16.973975 2094815 system_pods.go:89] "kube-scheduler-addons-442145" [3cc958d7-4f5d-4e0e-b28a-d7e289cf35cd] Running
	I0819 11:58:16.973982 2094815 system_pods.go:89] "metrics-server-8988944d9-6pvxf" [19869c82-91f3-46fc-b815-fa4ff03b5433] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 11:58:16.973990 2094815 system_pods.go:89] "nvidia-device-plugin-daemonset-mtvp4" [94c95122-aa0e-4340-8546-8a6b1539fa00] Running
	I0819 11:58:16.973994 2094815 system_pods.go:89] "registry-6fb4cdfc84-dqqsf" [2c0f4d4e-c475-4bda-a9c5-8958f5c9f2f7] Running
	I0819 11:58:16.973999 2094815 system_pods.go:89] "registry-proxy-kfv85" [56d07c49-4b74-4b80-94c5-73b9277814b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 11:58:16.974006 2094815 system_pods.go:89] "snapshot-controller-56fcc65765-hlm68" [1b328062-23c0-468b-a880-b428645ecba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.974011 2094815 system_pods.go:89] "snapshot-controller-56fcc65765-jmhhj" [686b9a12-4c81-4594-992c-42a552cac282] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:58:16.974016 2094815 system_pods.go:89] "storage-provisioner" [62a5e8a3-5947-4048-92af-651cde360d04] Running
	I0819 11:58:16.974021 2094815 system_pods.go:89] "tiller-deploy-b48cc5f79-97zl7" [246084b2-d891-47a5-b87d-17cbe04e45e7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 11:58:16.974029 2094815 system_pods.go:126] duration metric: took 206.617837ms to wait for k8s-apps to be running ...
	I0819 11:58:16.974036 2094815 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:58:16.974093 2094815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:58:17.009363 2094815 system_svc.go:56] duration metric: took 35.314678ms WaitForService to wait for kubelet
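
The kubelet service check above runs the systemctl command shown in the log and relies only on its exit status, since --quiet suppresses output. A one-function sketch of that idea, with an assumed name:

package sketch

import "os/exec"

// KubeletActive runs the same command string as the log line above and maps
// exit status 0 (nil error) to "active".
func KubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}
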
	I0819 11:58:17.009399 2094815 kubeadm.go:582] duration metric: took 38.901946923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:58:17.009430 2094815 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:58:17.094576 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.137918 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.140418 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.169066 2094815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:58:17.169110 2094815 node_conditions.go:123] node cpu capacity is 2
	I0819 11:58:17.169128 2094815 node_conditions.go:105] duration metric: took 159.690252ms to run NodePressure ...
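
The NodePressure step above reads back the capacity each kubelet advertises (ephemeral storage and CPU in this log). A small client-go sketch of reading those figures follows; the function name is an assumption and the real check also inspects node conditions.

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PrintNodeCapacity lists the nodes and prints the ephemeral-storage and CPU
// capacity values, the same two figures reported in the log lines above.
func PrintNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
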
	I0819 11:58:17.169145 2094815 start.go:241] waiting for startup goroutines ...
	I0819 11:58:17.354545 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:17.593984 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:17.638284 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:17.642853 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:17.853675 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.094105 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.137111 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.140537 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.353635 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:18.594646 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:18.637480 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:18.639574 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:18.853608 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.095960 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.197171 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.197789 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.354262 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:19.594039 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:19.637124 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:58:19.640461 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:19.853507 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.404943 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:20.405724 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:20.406428 2094815 kapi.go:107] duration metric: took 32.772889386s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 11:58:20.407477 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.594563 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:20.640730 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:20.853361 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.094323 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.140930 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.355131 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:21.600519 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:21.640854 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:21.854798 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.096097 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.144894 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.354624 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:22.594801 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:22.697342 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:22.853743 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.093505 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:23.149752 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.353821 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:23.595739 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:23.643907 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:23.855945 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.094034 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.143131 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.353394 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:24.594471 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:24.649890 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:24.858760 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.461887 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:25.462514 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:25.462566 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.594065 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:25.695544 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:25.853178 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.093451 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.140784 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.353123 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:26.594008 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:26.642652 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:26.853172 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.093789 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.140957 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.354037 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:27.595509 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:27.696482 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:27.852691 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.094782 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.141522 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.353550 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:28.594353 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:28.641095 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:28.852895 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.096189 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.143004 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.354218 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:29.594184 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:29.696213 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:29.852223 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.094934 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.142405 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.353579 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:30.594110 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:30.641369 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:30.853631 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.096402 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.140599 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.353152 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:31.593504 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:31.641601 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:31.852929 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.093870 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.141428 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.358430 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:32.599881 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:32.701750 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:32.853785 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:33.094736 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:33.141669 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:33.353736 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.164735 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.165596 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.173271 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.180264 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.192116 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.353327 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:34.596010 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:34.640526 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:34.853651 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.094182 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.140898 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.353604 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:35.594564 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:35.640870 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:35.857021 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.097832 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.203288 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.352813 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:36.594595 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:36.649973 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:36.853438 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.096147 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.143887 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.353384 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:37.594391 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:37.641092 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:37.853567 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.100919 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.140623 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.353435 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:38.595056 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:38.640638 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:38.853470 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.094891 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.141419 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.355393 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:39.594840 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:39.641167 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:39.852774 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.095041 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.141644 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.352888 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:40.594439 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:40.640818 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:40.853968 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.094665 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.141129 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.353032 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:41.595156 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:41.641369 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:41.853102 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.094483 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.141271 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.352991 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:42.594736 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:42.696049 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:42.854041 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.094837 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:43.142032 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.353744 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:43.595396 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:43.696975 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:43.854135 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:58:44.095311 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.141414 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:44.353704 2094815 kapi.go:107] duration metric: took 56.005316428s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 11:58:44.594889 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:44.641668 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.094232 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.141101 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:45.594310 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:45.640845 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.093354 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.140749 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:46.593687 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:46.645357 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.094495 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.141107 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:47.594101 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:47.640816 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.095083 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:48.140416 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:48.594936 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:48.641389 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.095009 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.140222 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:49.594634 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:49.641500 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.094377 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.141526 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:50.594046 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:50.640398 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.094382 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.142823 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:51.597805 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:51.641727 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.093484 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.141122 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:52.594564 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:52.641873 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.093427 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.141716 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:53.594394 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:53.641062 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.094178 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.140681 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:54.593990 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:54.640387 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.417223 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.421240 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:55.594573 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:55.640739 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.094157 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.140593 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:56.593872 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:56.643400 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.094625 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:57.141149 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:57.594805 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:57.642192 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.093807 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.141294 2094815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:58:58.602225 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:58.643932 2094815 kapi.go:107] duration metric: took 1m11.007282094s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 11:58:59.102748 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:58:59.596788 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.102758 2094815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:59:00.602957 2094815 kapi.go:107] duration metric: took 1m10.012763218s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 11:59:00.604208 2094815 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-442145 cluster.
	I0819 11:59:00.605407 2094815 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 11:59:00.606528 2094815 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 11:59:00.607773 2094815 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, metrics-server, nvidia-device-plugin, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 11:59:00.609006 2094815 addons.go:510] duration metric: took 1m22.501496602s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns metrics-server nvidia-device-plugin helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 11:59:00.609084 2094815 start.go:246] waiting for cluster config update ...
	I0819 11:59:00.609113 2094815 start.go:255] writing updated cluster config ...
	I0819 11:59:00.609484 2094815 ssh_runner.go:195] Run: rm -f paused
	I0819 11:59:00.661156 2094815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 11:59:00.662835 2094815 out.go:177] * Done! kubectl is now configured to use "addons-442145" cluster and "default" namespace by default
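	As a minimal illustration of the gcp-auth hints printed in the log above (the pod name, image, and sleep command are assumed examples for this sketch and were not part of this test run):

	  # Create a pod whose configuration carries the gcp-auth-skip-secret label;
	  # per the addon message above, that label key keeps GCP credentials from
	  # being mounted into the pod:
	  kubectl --context addons-442145 run gcp-auth-skip-demo \
	    --image=gcr.io/k8s-minikube/busybox \
	    --labels="gcp-auth-skip-secret=true" -- sleep 3600

	  # Per the same hint, pods created before the addon was enabled only pick up
	  # credentials after being recreated or after re-running the addon with --refresh:
	  out/minikube-linux-amd64 -p addons-442145 addons enable gcp-auth --refresh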
	
	
	==> CRI-O <==
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.204594551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=455377f6-9ac8-48a2-bf27-70de5bf357da name=/runtime.v1.RuntimeService/Version
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.205675221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae8cfd4f-2682-488c-9e6e-f9f236f06e21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.206903683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069097206873795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae8cfd4f-2682-488c-9e6e-f9f236f06e21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.207664421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b67cd1e0-f12a-4198-8d74-ed0f60c69d45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.207716582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b67cd1e0-f12a-4198-8d74-ed0f60c69d45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.210723652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e67414
62f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babc
a2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b67cd1e0-f12a-4198-8d74-ed0f60c69d45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.254696977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ce7a3ee-35c4-4685-a95c-5f7bd6fd62f9 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.254814901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ce7a3ee-35c4-4685-a95c-5f7bd6fd62f9 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.255945249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0a23369-6b90-4960-bf55-2415458b190f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.257535568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069097257509462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0a23369-6b90-4960-bf55-2415458b190f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.258087980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6438e5c3-2168-4a3e-8ade-0a3ded056ad3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.258154984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6438e5c3-2168-4a3e-8ade-0a3ded056ad3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.258515400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e67414
62f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babc
a2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6438e5c3-2168-4a3e-8ade-0a3ded056ad3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.291729968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2693d2b9-34b3-41d8-9144-7cd19462d186 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.291817383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2693d2b9-34b3-41d8-9144-7cd19462d186 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.293456739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7dcad06-468d-4774-97da-19babeabf9ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.294896274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069097294872556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7dcad06-468d-4774-97da-19babeabf9ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.295531317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2847615-c10a-4c95-9ff5-87a61fff95f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.295582139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2847615-c10a-4c95-9ff5-87a61fff95f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.295832259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e67414
62f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babc
a2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2847615-c10a-4c95-9ff5-87a61fff95f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.299155398Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=531e62c6-9944-4acc-8a1d-e68855512d74 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.299456283Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-qnwt7,Uid:6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068933948200507,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:02:13.622085296Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9bfb1b5c-61d0-4012-be13-db86ac5187f6,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1724068793198981510,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T11:59:52.888429432Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&PodSandboxMetadata{Name:busybox,Uid:2d584d27-474c-49b5-ab65-6b144518d97b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068742719833115,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-ab65-6b144518d97b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T11:59:02.112152040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2ae895d3ac20bff7e
5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&PodSandboxMetadata{Name:metrics-server-8988944d9-6pvxf,Uid:19869c82-91f3-46fc-b815-fa4ff03b5433,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068664958990742,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,k8s-app: metrics-server,pod-template-hash: 8988944d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T11:57:44.638199556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:62a5e8a3-5947-4048-92af-651cde360d04,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068663477547025,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T11:57:42.862837466Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-mbpxv,Uid:ef0cf298-0354-4a2a-be6d-02396dc53b01,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068663457036593,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,pod-template-hash: 86d989889c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T11:57:43.143868094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&PodSandboxMetadata{Name:kube-proxy-rpzwc,Uid:f58e40e6-11d0-415a-a1d4-96237cba48e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068659226657096,Labels:map[string]string{controller-revision-hash:
5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T11:57:38.621108975Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-tn2xl,Uid:0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068659175654492,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T11:57:38.864721880Z,kubernetes.io/config.source: api,},R
untimeHandler:,},&PodSandbox{Id:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-442145,Uid:dedbd85d441e3e176dabeeab6fe5167a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068648560532079,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dedbd85d441e3e176dabeeab6fe5167a,kubernetes.io/config.seen: 2024-08-19T11:57:27.881513716Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-442145,Uid:b73b959efa116aa64d205ed9688f57c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068648560162870,Labels:map[
string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b73b959efa116aa64d205ed9688f57c3,kubernetes.io/config.seen: 2024-08-19T11:57:27.881512822Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-442145,Uid:fc4b177f9a48b550e434d5193f4cd9b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068648559650765,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,tier: control-plane,},Annotations:map[string]string{kubeadm
.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.65:8443,kubernetes.io/config.hash: fc4b177f9a48b550e434d5193f4cd9b2,kubernetes.io/config.seen: 2024-08-19T11:57:27.881511656Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&PodSandboxMetadata{Name:etcd-addons-442145,Uid:9b2ab71ac6bdb1790527ad33531f67bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724068648536231412,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.65:2379,kubernetes.io/config.hash: 9b2ab71ac6bdb1790527ad33531f67bb,kubernetes.io/config.seen: 2024-08-19T11:57:27.881507575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}"
file="otel-collector/interceptors.go:74" id=531e62c6-9944-4acc-8a1d-e68855512d74 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.299997829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd79df8a-d2c6-4f44-b86a-71ac1bc1b6fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.300066523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd79df8a-d2c6-4f44-b86a-71ac1bc1b6fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:04:57 addons-442145 crio[683]: time="2024-08-19 12:04:57.300667177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a3367af52e1de9c99d1d9007f2f0d6eb293ccda3680455409baaa87ecabd017,PodSandboxId:1bc402b81fe5d458f728107331afb61d4a8f52b9ac6aee03f5df3dfd7e558403,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724068935000085416,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qnwt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6391c36c-7b77-4f53-8ee0-cb3bc71b5fb5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c68c6e0f298f91f63b1eff3858b144fc02c728b578023e974c0f0735a9f1a,PodSandboxId:bc503bdf726dccf0d706dab1775e487f367552ff2e44a3887decfa09b53dbddc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724068795462759078,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bfb1b5c-61d0-4012-be13-db86ac5187f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eaebe7e8a45474c7befe65b1d83fc8496d65e4e69a7670e1bbfb5f32692a1f8,PodSandboxId:0b05ccb41acf1463d69f0ba979b166924a2020e1c2a2a37ae7a4cc6f69a62901,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724068743769117881,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d584d27-474c-49b5-a
b65-6b144518d97b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6d02f54a98908ec6f45d80711cdf3490f324e830028d14c9ee6693df40c83,PodSandboxId:c2ae895d3ac20bff7e5c663f4b46b77bffabb2f68a4add8b3ed9a66c7dab3885,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724068703464795209,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-6pvxf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 19869c82-91f3-46fc-b815-fa4ff03b5433,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6254752d6998253fa47eead78cbe0e1514cdf304ccb3a52421bfe650a723734b,PodSandboxId:5a5d7317b47637ebba50faf9e0e92867b187eebc6f0fc83c1fce973f049355e3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724068692879559122,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-mbpxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ef0cf298-0354-4a2a-be6d-02396dc53b01,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551,PodSandboxId:b9b34dc26097f4dc375901adc7dff37ef1583ef17e7cdf7a3fdf386b29295b0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724068664806319452,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a5e8a3-5947-4048-92af-651cde360d04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e,PodSandboxId:6db0b82d8669c4eb85c5a9f7ec2a3b5822d92b43199efb95979dee576507f425,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724068662298469011,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tn2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffe02ab-9e5f-4a7a-80c5-e214a48d181e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec,PodSandboxId:63129ec0d0012c821f8ea28f54198d6e8241075007e58f29456105397eb4143e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724068659954672034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rpzwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f58e40e6-11d0-415a-a1d4-96237cba48e7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2,PodSandboxId:1f381cb328e5be82b728b1514daa8996e368391019d9df453223a7a1082c5238,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724068648802614329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2ab71ac6bdb1790527ad33531f67bb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08,PodSandboxId:9492061e9db4c65f58eddf8a62389f1f873ffac8873f03aa02d1c0cc82a2ed45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e67414
62f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724068648771980537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedbd85d441e3e176dabeeab6fe5167a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2,PodSandboxId:71d2a964740076660a0312a496bd256bb0c454e06b5f1a6494bc4b9c97a3b673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babc
a2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724068648790338401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4b177f9a48b550e434d5193f4cd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365,PodSandboxId:94722e7e7accd80402bbfcf861800d95ee77c3f09dbe49410946d2dc4ae5fe44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7
a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724068648681924705,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-442145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73b959efa116aa64d205ed9688f57c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd79df8a-d2c6-4f44-b86a-71ac1bc1b6fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a3367af52e1d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   1bc402b81fe5d       hello-world-app-55bf9c44b4-qnwt7
	184c68c6e0f29       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   bc503bdf726dc       nginx
	1eaebe7e8a454       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   0b05ccb41acf1       busybox
	e3d6d02f54a98       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   c2ae895d3ac20       metrics-server-8988944d9-6pvxf
	6254752d69982       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   5a5d7317b4763       local-path-provisioner-86d989889c-mbpxv
	6c82b35f9f026       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   b9b34dc26097f       storage-provisioner
	45e962e7b4514       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   6db0b82d8669c       coredns-6f6b679f8f-tn2xl
	201fc6e23ba9a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   63129ec0d0012       kube-proxy-rpzwc
	21e89f25b7676       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   1f381cb328e5b       etcd-addons-442145
	f6da520049170       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   71d2a96474007       kube-apiserver-addons-442145
	06a1084dc3058       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   9492061e9db4c       kube-scheduler-addons-442145
	ac0fd6bc35b71       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   94722e7e7accd       kube-controller-manager-addons-442145
	
	
	==> coredns [45e962e7b4514778fe56d55b14caf1ec77ec713b9a9d62b25d2d19c49851987e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[196058232]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 11:57:43.547) (total time: 30001ms):
	Trace[196058232]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:58:13.548)
	Trace[196058232]: [30.001314792s] [30.001314792s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1485178553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 11:57:43.553) (total time: 30000ms):
	Trace[1485178553]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:58:13.554)
	Trace[1485178553]: [30.00061879s] [30.00061879s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51174 - 43396 "HINFO IN 8104933949717401389.3285066136651019283. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009295986s
	[INFO] 10.244.0.22:34039 - 7240 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000512313s
	[INFO] 10.244.0.22:45891 - 23057 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213176s
	[INFO] 10.244.0.22:48975 - 6272 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000146151s
	[INFO] 10.244.0.22:48837 - 15818 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084326s
	[INFO] 10.244.0.22:60646 - 64239 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100267s
	[INFO] 10.244.0.22:45682 - 53888 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000056222s
	[INFO] 10.244.0.22:53667 - 61607 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000599993s
	[INFO] 10.244.0.22:43513 - 37327 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000599423s
	[INFO] 10.244.0.28:59444 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00025949s
	[INFO] 10.244.0.28:48261 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068857s
	
	
	==> describe nodes <==
	Name:               addons-442145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-442145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=addons-442145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_57_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-442145
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:57:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-442145
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:02:39 +0000   Mon, 19 Aug 2024 11:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:02:39 +0000   Mon, 19 Aug 2024 11:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:02:39 +0000   Mon, 19 Aug 2024 11:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:02:39 +0000   Mon, 19 Aug 2024 11:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    addons-442145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc55cc0793b94af2ad0daa5ec39f8ff5
	  System UUID:                fc55cc07-93b9-4af2-ad0d-aa5ec39f8ff5
	  Boot ID:                    c2eabfee-0247-40b5-b3a1-42dd2b5a4e27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  default                     hello-world-app-55bf9c44b4-qnwt7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 coredns-6f6b679f8f-tn2xl                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m19s
	  kube-system                 etcd-addons-442145                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m26s
	  kube-system                 kube-apiserver-addons-442145               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-controller-manager-addons-442145      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-proxy-rpzwc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-addons-442145               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 metrics-server-8988944d9-6pvxf             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m13s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  local-path-storage          local-path-provisioner-86d989889c-mbpxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m15s                  kube-proxy       
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m29s (x8 over 7m30s)  kubelet          Node addons-442145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x8 over 7m30s)  kubelet          Node addons-442145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x7 over 7m30s)  kubelet          Node addons-442145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m24s                  kubelet          Node addons-442145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s                  kubelet          Node addons-442145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s                  kubelet          Node addons-442145 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m23s                  kubelet          Node addons-442145 status is now: NodeReady
	  Normal  RegisteredNode           7m20s                  node-controller  Node addons-442145 event: Registered Node addons-442145 in Controller
	
	
	==> dmesg <==
	[  +4.729379] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +1.045668] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.079164] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.119117] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.251088] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 11:58] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.096849] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.116963] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.492986] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.261106] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.148929] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.025079] kauditd_printk_skb: 64 callbacks suppressed
	[ +10.425161] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.024301] kauditd_printk_skb: 15 callbacks suppressed
	[Aug19 11:59] kauditd_printk_skb: 56 callbacks suppressed
	[ +21.161050] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.917815] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.049803] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.270725] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.183051] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.007791] kauditd_printk_skb: 47 callbacks suppressed
	[Aug19 12:00] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.277456] kauditd_printk_skb: 42 callbacks suppressed
	[Aug19 12:02] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.569659] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [21e89f25b767633cfed93e529e540d5bafb5fbacb44e3dc7a21ef6f9742909f2] <==
	{"level":"info","ts":"2024-08-19T11:58:55.405519Z","caller":"traceutil/trace.go:171","msg":"trace[983053068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"276.557046ms","start":"2024-08-19T11:58:55.128950Z","end":"2024-08-19T11:58:55.405507Z","steps":["trace[983053068] 'agreement among raft nodes before linearized reading'  (duration: 274.382506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:58:55.403550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.525327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:58:55.405751Z","caller":"traceutil/trace.go:171","msg":"trace[1820371117] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1157; }","duration":"294.668605ms","start":"2024-08-19T11:58:55.111018Z","end":"2024-08-19T11:58:55.405687Z","steps":["trace[1820371117] 'agreement among raft nodes before linearized reading'  (duration: 292.512483ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:28.563060Z","caller":"traceutil/trace.go:171","msg":"trace[1562016742] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"101.174403ms","start":"2024-08-19T11:59:28.461861Z","end":"2024-08-19T11:59:28.563036Z","steps":["trace[1562016742] 'process raft request'  (duration: 100.874093ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:40.786719Z","caller":"traceutil/trace.go:171","msg":"trace[1287512661] linearizableReadLoop","detail":"{readStateIndex:1505; appliedIndex:1504; }","duration":"171.731169ms","start":"2024-08-19T11:59:40.614960Z","end":"2024-08-19T11:59:40.786692Z","steps":["trace[1287512661] 'read index received'  (duration: 171.610324ms)","trace[1287512661] 'applied index is now lower than readState.Index'  (duration: 120.343µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:59:40.786959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.998671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-19T11:59:40.787025Z","caller":"traceutil/trace.go:171","msg":"trace[1326557946] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1457; }","duration":"172.080904ms","start":"2024-08-19T11:59:40.614931Z","end":"2024-08-19T11:59:40.787012Z","steps":["trace[1326557946] 'agreement among raft nodes before linearized reading'  (duration: 171.909653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:40.787026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.14362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:40.787063Z","caller":"traceutil/trace.go:171","msg":"trace[394561874] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1457; }","duration":"152.18756ms","start":"2024-08-19T11:59:40.634868Z","end":"2024-08-19T11:59:40.787056Z","steps":["trace[394561874] 'agreement among raft nodes before linearized reading'  (duration: 152.128388ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:40.787194Z","caller":"traceutil/trace.go:171","msg":"trace[557107817] transaction","detail":"{read_only:false; response_revision:1457; number_of_response:1; }","duration":"266.610427ms","start":"2024-08-19T11:59:40.520573Z","end":"2024-08-19T11:59:40.787184Z","steps":["trace[557107817] 'process raft request'  (duration: 266.040078ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:59:55.070091Z","caller":"traceutil/trace.go:171","msg":"trace[1979777551] linearizableReadLoop","detail":"{readStateIndex:1681; appliedIndex:1680; }","duration":"122.471569ms","start":"2024-08-19T11:59:54.947549Z","end":"2024-08-19T11:59:55.070021Z","steps":["trace[1979777551] 'read index received'  (duration: 122.33759ms)","trace[1979777551] 'applied index is now lower than readState.Index'  (duration: 133.528µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:59:55.070427Z","caller":"traceutil/trace.go:171","msg":"trace[1493306816] transaction","detail":"{read_only:false; response_revision:1623; number_of_response:1; }","duration":"163.4653ms","start":"2024-08-19T11:59:54.906948Z","end":"2024-08-19T11:59:55.070413Z","steps":["trace[1493306816] 'process raft request'  (duration: 162.983422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:55.070621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.005189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:55.070642Z","caller":"traceutil/trace.go:171","msg":"trace[1727256531] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1623; }","duration":"123.090561ms","start":"2024-08-19T11:59:54.947545Z","end":"2024-08-19T11:59:55.070636Z","steps":["trace[1727256531] 'agreement among raft nodes before linearized reading'  (duration: 122.990283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:55.070793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.337713ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:55.070810Z","caller":"traceutil/trace.go:171","msg":"trace[1889767900] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1623; }","duration":"106.362166ms","start":"2024-08-19T11:59:54.964443Z","end":"2024-08-19T11:59:55.070806Z","steps":["trace[1889767900] 'agreement among raft nodes before linearized reading'  (duration: 106.329518ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:00:00.333643Z","caller":"traceutil/trace.go:171","msg":"trace[921500174] transaction","detail":"{read_only:false; response_revision:1683; number_of_response:1; }","duration":"338.908247ms","start":"2024-08-19T11:59:59.994719Z","end":"2024-08-19T12:00:00.333628Z","steps":["trace[921500174] 'process raft request'  (duration: 338.787954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:00.333778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T11:59:59.994647Z","time spent":"339.053518ms","remote":"127.0.0.1:33356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":591,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/ingress/default/nginx-ingress\" mod_revision:1594 > success:<request_put:<key:\"/registry/ingress/default/nginx-ingress\" value_size:544 >> failure:<request_range:<key:\"/registry/ingress/default/nginx-ingress\" > >"}
	{"level":"info","ts":"2024-08-19T12:00:00.336335Z","caller":"traceutil/trace.go:171","msg":"trace[203872634] linearizableReadLoop","detail":"{readStateIndex:1743; appliedIndex:1742; }","duration":"223.718948ms","start":"2024-08-19T12:00:00.112603Z","end":"2024-08-19T12:00:00.336322Z","steps":["trace[203872634] 'read index received'  (duration: 221.060936ms)","trace[203872634] 'applied index is now lower than readState.Index'  (duration: 2.657559ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T12:00:00.336608Z","caller":"traceutil/trace.go:171","msg":"trace[1022449697] transaction","detail":"{read_only:false; response_revision:1684; number_of_response:1; }","duration":"228.477598ms","start":"2024-08-19T12:00:00.108121Z","end":"2024-08-19T12:00:00.336599Z","steps":["trace[1022449697] 'process raft request'  (duration: 228.099048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:00.337143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.524432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T12:00:00.337287Z","caller":"traceutil/trace.go:171","msg":"trace[341485831] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1684; }","duration":"224.633578ms","start":"2024-08-19T12:00:00.112600Z","end":"2024-08-19T12:00:00.337233Z","steps":["trace[341485831] 'agreement among raft nodes before linearized reading'  (duration: 224.509934ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:00.337519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.290233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T12:00:00.337691Z","caller":"traceutil/trace.go:171","msg":"trace[2055964969] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1684; }","duration":"126.465449ms","start":"2024-08-19T12:00:00.211219Z","end":"2024-08-19T12:00:00.337684Z","steps":["trace[2055964969] 'agreement among raft nodes before linearized reading'  (duration: 126.278854ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:00:31.465488Z","caller":"traceutil/trace.go:171","msg":"trace[1356514116] transaction","detail":"{read_only:false; response_revision:1866; number_of_response:1; }","duration":"218.737492ms","start":"2024-08-19T12:00:31.246731Z","end":"2024-08-19T12:00:31.465469Z","steps":["trace[1356514116] 'process raft request'  (duration: 218.616234ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:04:57 up 8 min,  0 users,  load average: 0.44, 0.74, 0.52
	Linux addons-442145 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f6da52004917069ff1a6f3951bcf805ea3b922d5a4b4a01d584c1ae4353ea4a2] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 11:59:24.982973       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.126.149:443: connect: connection refused" logger="UnhandledError"
	E0819 11:59:24.984687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.126.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.126.149:443: connect: connection refused" logger="UnhandledError"
	I0819 11:59:25.042595       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0819 11:59:47.699155       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 11:59:50.838664       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 11:59:51.869613       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 11:59:52.749665       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 11:59:52.932582       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.136.165"}
	I0819 11:59:57.150761       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.88.125"}
	I0819 12:00:11.618898       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.618969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.649643       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.650428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.660609       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.660665       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.684918       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.685079       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 12:00:11.729148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 12:00:11.729232       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 12:00:12.649514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 12:00:12.730354       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 12:00:12.831614       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0819 12:02:13.804966       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.118.71"}
	
	
	==> kube-controller-manager [ac0fd6bc35b7149a82129e12c8bfd32cdc28aa80b4906cde08861b6f07cd9365] <==
	W0819 12:02:52.882691       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:02:52.882834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:16.236044       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:16.236195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:19.585515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:19.585673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:36.420746       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:36.420808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:03:48.281092       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:03:48.281216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:05.159682       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:05.159761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:10.864472       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:10.864533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:21.245600       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:21.245810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:33.235194       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:33.235356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:41.110494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:41.110627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:46.364810       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:46.364942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 12:04:56.167025       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 12:04:56.167082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 12:04:56.341235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="9.392µs"
	
	
	==> kube-proxy [201fc6e23ba9a5c4eb11c2f9cf4447c01e3c9afda6c8115421dab1ffff9b71ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:57:41.366678       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 11:57:41.453806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E0819 11:57:41.453886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:57:42.211390       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:57:42.211452       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:57:42.211491       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:57:42.397758       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:57:42.398090       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:57:42.398129       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:57:42.404736       1 config.go:197] "Starting service config controller"
	I0819 11:57:42.404787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:57:42.404809       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:57:42.404813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:57:42.405208       1 config.go:326] "Starting node config controller"
	I0819 11:57:42.405215       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:57:42.505364       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:57:42.505389       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:57:42.505414       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [06a1084dc30582bb85824ed05e3914b6636b95d7bdaf5f86a0c84e2d6487da08] <==
	W0819 11:57:31.809476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 11:57:31.809537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:31.866827       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:57:31.866879       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 11:57:31.946987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:57:31.947043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:31.978012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:57:31.978102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.007049       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:57:32.007101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.036779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.036832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.088489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.088543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.156867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:57:32.156922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.186169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:57:32.186219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.193420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:57:32.193498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.211817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.211869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:57:32.315056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:57:32.315125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:57:33.581968       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:03:54 addons-442145 kubelet[1228]: E0819 12:03:54.058606    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069034057899886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:02 addons-442145 kubelet[1228]: I0819 12:04:02.703676    1228 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 12:04:04 addons-442145 kubelet[1228]: E0819 12:04:04.063469    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069044062936843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:04 addons-442145 kubelet[1228]: E0819 12:04:04.063508    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069044062936843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:14 addons-442145 kubelet[1228]: E0819 12:04:14.066458    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069054066065156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:14 addons-442145 kubelet[1228]: E0819 12:04:14.066512    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069054066065156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:24 addons-442145 kubelet[1228]: E0819 12:04:24.069336    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069064068866839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:24 addons-442145 kubelet[1228]: E0819 12:04:24.069828    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069064068866839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:33 addons-442145 kubelet[1228]: E0819 12:04:33.757494    1228 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:04:33 addons-442145 kubelet[1228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:04:33 addons-442145 kubelet[1228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:04:33 addons-442145 kubelet[1228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:04:33 addons-442145 kubelet[1228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:04:34 addons-442145 kubelet[1228]: E0819 12:04:34.072495    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069074072033000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:34 addons-442145 kubelet[1228]: E0819 12:04:34.072521    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069074072033000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:44 addons-442145 kubelet[1228]: E0819 12:04:44.078410    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069084077644626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:44 addons-442145 kubelet[1228]: E0819 12:04:44.078453    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069084077644626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:54 addons-442145 kubelet[1228]: E0819 12:04:54.081089    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069094080685711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:54 addons-442145 kubelet[1228]: E0819 12:04:54.081588    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069094080685711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:04:57 addons-442145 kubelet[1228]: I0819 12:04:57.740756    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/19869c82-91f3-46fc-b815-fa4ff03b5433-tmp-dir\") pod \"19869c82-91f3-46fc-b815-fa4ff03b5433\" (UID: \"19869c82-91f3-46fc-b815-fa4ff03b5433\") "
	Aug 19 12:04:57 addons-442145 kubelet[1228]: I0819 12:04:57.740804    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqjm2\" (UniqueName: \"kubernetes.io/projected/19869c82-91f3-46fc-b815-fa4ff03b5433-kube-api-access-pqjm2\") pod \"19869c82-91f3-46fc-b815-fa4ff03b5433\" (UID: \"19869c82-91f3-46fc-b815-fa4ff03b5433\") "
	Aug 19 12:04:57 addons-442145 kubelet[1228]: I0819 12:04:57.741415    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19869c82-91f3-46fc-b815-fa4ff03b5433-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "19869c82-91f3-46fc-b815-fa4ff03b5433" (UID: "19869c82-91f3-46fc-b815-fa4ff03b5433"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 12:04:57 addons-442145 kubelet[1228]: I0819 12:04:57.744418    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19869c82-91f3-46fc-b815-fa4ff03b5433-kube-api-access-pqjm2" (OuterVolumeSpecName: "kube-api-access-pqjm2") pod "19869c82-91f3-46fc-b815-fa4ff03b5433" (UID: "19869c82-91f3-46fc-b815-fa4ff03b5433"). InnerVolumeSpecName "kube-api-access-pqjm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:04:57 addons-442145 kubelet[1228]: I0819 12:04:57.841671    1228 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/19869c82-91f3-46fc-b815-fa4ff03b5433-tmp-dir\") on node \"addons-442145\" DevicePath \"\""
	Aug 19 12:04:57 addons-442145 kubelet[1228]: I0819 12:04:57.841698    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pqjm2\" (UniqueName: \"kubernetes.io/projected/19869c82-91f3-46fc-b815-fa4ff03b5433-kube-api-access-pqjm2\") on node \"addons-442145\" DevicePath \"\""
	
	
	==> storage-provisioner [6c82b35f9f0260bb94eed8da803708e5a8bcf4e2a0139f2aaab30653e6c10551] <==
	I0819 11:57:45.993938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:57:46.077537       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:57:46.077609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:57:46.162351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:57:46.168605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-442145_63aeb766-6c92-4616-afe4-7f149d64b057!
	I0819 11:57:46.168723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21bb4f94-56c4-42f5-8085-2d508c8df71f", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-442145_63aeb766-6c92-4616-afe4-7f149d64b057 became leader
	I0819 11:57:46.269624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-442145_63aeb766-6c92-4616-afe4-7f149d64b057!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-442145 -n addons-442145
helpers_test.go:261: (dbg) Run:  kubectl --context addons-442145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (317.32s)
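The post-mortem above ends with the metrics-server pod's volumes being unmounted while the apiserver was still reporting v1beta1.metrics.k8s.io as unreachable (connection refused to 10.100.126.149:443). A minimal manual follow-up, assuming the same kubectl context the test uses, could be:

	# diagnostic sketch (not part of the test): check whether the aggregated metrics API ever became Available
	kubectl --context addons-442145 get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl --context addons-442145 -n kube-system get deploy metrics-server
	kubectl --context addons-442145 top nodes

If the APIService still shows Available=False, the problem is upstream of the test assertion, in the metrics-server pod or its Service endpoint.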

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-442145
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-442145: exit status 82 (2m0.480865389s)

                                                
                                                
-- stdout --
	* Stopping node "addons-442145"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-442145" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-442145
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-442145: exit status 11 (21.513836637s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.65:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-442145" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-442145
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-442145: exit status 11 (6.143034678s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.65:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-442145" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-442145
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-442145: exit status 11 (6.144700903s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.65:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-442145" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.28s)
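Every command in this test failed the same way: the initial stop timed out (GUEST_STOP_TIMEOUT) and the later addon calls could not even reach 192.168.39.65:22 ("no route to host"), which points at a VM stuck between Running and stopped. A host-side check for that state, assuming the kvm2/libvirt driver this job uses and that the libvirt domain is named after the profile, might look like:

	# diagnostic sketch (not part of the test): compare libvirt's view of the guest with minikube's
	sudo virsh list --all
	sudo virsh domstate addons-442145
	out/minikube-linux-amd64 status -p addons-442145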

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 image ls --format short --alsologtostderr: (2.297819257s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102083 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-102083 image ls --format short --alsologtostderr:
I0819 12:11:47.820345 2103790 out.go:345] Setting OutFile to fd 1 ...
I0819 12:11:47.820489 2103790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:47.820500 2103790 out.go:358] Setting ErrFile to fd 2...
I0819 12:11:47.820507 2103790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:47.820755 2103790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
I0819 12:11:47.821483 2103790 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:47.821625 2103790 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:47.822185 2103790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:47.822254 2103790 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:47.837859 2103790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
I0819 12:11:47.838328 2103790 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:47.839014 2103790 main.go:141] libmachine: Using API Version  1
I0819 12:11:47.839071 2103790 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:47.839422 2103790 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:47.839643 2103790 main.go:141] libmachine: (functional-102083) Calling .GetState
I0819 12:11:47.841680 2103790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:47.841734 2103790 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:47.857442 2103790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
I0819 12:11:47.857938 2103790 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:47.858508 2103790 main.go:141] libmachine: Using API Version  1
I0819 12:11:47.858536 2103790 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:47.858868 2103790 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:47.859084 2103790 main.go:141] libmachine: (functional-102083) Calling .DriverName
I0819 12:11:47.859268 2103790 ssh_runner.go:195] Run: systemctl --version
I0819 12:11:47.859324 2103790 main.go:141] libmachine: (functional-102083) Calling .GetSSHHostname
I0819 12:11:47.862606 2103790 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:47.863065 2103790 main.go:141] libmachine: (functional-102083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:e6", ip: ""} in network mk-functional-102083: {Iface:virbr1 ExpiryTime:2024-08-19 13:08:43 +0000 UTC Type:0 Mac:52:54:00:4e:6e:e6 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-102083 Clientid:01:52:54:00:4e:6e:e6}
I0819 12:11:47.863099 2103790 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:47.863220 2103790 main.go:141] libmachine: (functional-102083) Calling .GetSSHPort
I0819 12:11:47.863403 2103790 main.go:141] libmachine: (functional-102083) Calling .GetSSHKeyPath
I0819 12:11:47.863576 2103790 main.go:141] libmachine: (functional-102083) Calling .GetSSHUsername
I0819 12:11:47.863726 2103790 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/functional-102083/id_rsa Username:docker}
I0819 12:11:47.982655 2103790 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 12:11:50.044681 2103790 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.061984742s)
W0819 12:11:50.044757 2103790 cache_images.go:734] Failed to list images for profile functional-102083 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0819 12:11:50.030123    8419 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-08-19T12:11:50Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0819 12:11:50.044812 2103790 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.044826 2103790 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.045169 2103790 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.045192 2103790 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.045201 2103790 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.045210 2103790 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.045500 2103790 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.045526 2103790 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.045532 2103790 main.go:141] libmachine: (functional-102083) DBG | Closing plugin on server side
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.30s)
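The minikube command itself did not error; `sudo crictl images --output json` on the node hit the crictl client timeout (the call returned after ~2s with DeadlineExceeded), so `image ls` printed an empty list and the check for registry.k8s.io/pause failed. A manual re-check on the same node with a larger timeout, assuming the profile is still up, could be:

	# diagnostic sketch (not part of the test): retry the listing with a longer crictl timeout
	out/minikube-linux-amd64 -p functional-102083 ssh -- sudo crictl --timeout 60s images
	out/minikube-linux-amd64 -p functional-102083 image ls --format table --alsologtostderr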

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 node stop m02 -v=7 --alsologtostderr
E0819 12:16:29.237449 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:31.799605 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:36.921927 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:47.164211 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:17:07.646108 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:17:48.607527 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.457767598s)

                                                
                                                
-- stdout --
	* Stopping node "ha-584983-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:16:29.039601 2108155 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:16:29.039861 2108155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:16:29.039874 2108155 out.go:358] Setting ErrFile to fd 2...
	I0819 12:16:29.039879 2108155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:16:29.040041 2108155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:16:29.040322 2108155 mustload.go:65] Loading cluster: ha-584983
	I0819 12:16:29.040685 2108155 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:16:29.040702 2108155 stop.go:39] StopHost: ha-584983-m02
	I0819 12:16:29.041044 2108155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:16:29.041094 2108155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:16:29.057870 2108155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0819 12:16:29.058314 2108155 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:16:29.058835 2108155 main.go:141] libmachine: Using API Version  1
	I0819 12:16:29.058861 2108155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:16:29.059211 2108155 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:16:29.061111 2108155 out.go:177] * Stopping node "ha-584983-m02"  ...
	I0819 12:16:29.062413 2108155 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 12:16:29.062443 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:16:29.062696 2108155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 12:16:29.062742 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:16:29.065804 2108155 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:16:29.066236 2108155 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:16:29.066274 2108155 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:16:29.066393 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:16:29.066564 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:16:29.066729 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:16:29.066882 2108155 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:16:29.148082 2108155 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 12:16:29.202058 2108155 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 12:16:29.256371 2108155 main.go:141] libmachine: Stopping "ha-584983-m02"...
	I0819 12:16:29.256414 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:16:29.257860 2108155 main.go:141] libmachine: (ha-584983-m02) Calling .Stop
	I0819 12:16:29.261457 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 0/120
	I0819 12:16:30.263130 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 1/120
	I0819 12:16:31.264627 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 2/120
	I0819 12:16:32.266603 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 3/120
	I0819 12:16:33.267882 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 4/120
	I0819 12:16:34.269856 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 5/120
	I0819 12:16:35.271125 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 6/120
	I0819 12:16:36.272801 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 7/120
	I0819 12:16:37.274068 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 8/120
	I0819 12:16:38.275527 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 9/120
	I0819 12:16:39.277598 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 10/120
	I0819 12:16:40.278981 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 11/120
	I0819 12:16:41.280996 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 12/120
	I0819 12:16:42.282242 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 13/120
	I0819 12:16:43.283805 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 14/120
	I0819 12:16:44.285892 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 15/120
	I0819 12:16:45.287446 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 16/120
	I0819 12:16:46.288789 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 17/120
	I0819 12:16:47.290077 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 18/120
	I0819 12:16:48.291595 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 19/120
	I0819 12:16:49.293609 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 20/120
	I0819 12:16:50.294845 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 21/120
	I0819 12:16:51.296910 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 22/120
	I0819 12:16:52.298565 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 23/120
	I0819 12:16:53.299869 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 24/120
	I0819 12:16:54.301820 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 25/120
	I0819 12:16:55.303101 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 26/120
	I0819 12:16:56.304553 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 27/120
	I0819 12:16:57.305892 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 28/120
	I0819 12:16:58.307185 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 29/120
	I0819 12:16:59.309397 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 30/120
	I0819 12:17:00.310725 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 31/120
	I0819 12:17:01.312095 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 32/120
	I0819 12:17:02.313569 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 33/120
	I0819 12:17:03.315027 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 34/120
	I0819 12:17:04.316361 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 35/120
	I0819 12:17:05.318758 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 36/120
	I0819 12:17:06.320121 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 37/120
	I0819 12:17:07.321421 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 38/120
	I0819 12:17:08.322719 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 39/120
	I0819 12:17:09.324608 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 40/120
	I0819 12:17:10.326742 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 41/120
	I0819 12:17:11.328181 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 42/120
	I0819 12:17:12.330299 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 43/120
	I0819 12:17:13.331524 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 44/120
	I0819 12:17:14.333536 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 45/120
	I0819 12:17:15.334924 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 46/120
	I0819 12:17:16.336292 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 47/120
	I0819 12:17:17.337678 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 48/120
	I0819 12:17:18.338975 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 49/120
	I0819 12:17:19.340320 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 50/120
	I0819 12:17:20.341687 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 51/120
	I0819 12:17:21.343016 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 52/120
	I0819 12:17:22.344671 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 53/120
	I0819 12:17:23.346110 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 54/120
	I0819 12:17:24.348108 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 55/120
	I0819 12:17:25.349395 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 56/120
	I0819 12:17:26.350724 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 57/120
	I0819 12:17:27.352154 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 58/120
	I0819 12:17:28.354258 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 59/120
	I0819 12:17:29.356149 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 60/120
	I0819 12:17:30.357508 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 61/120
	I0819 12:17:31.358988 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 62/120
	I0819 12:17:32.360288 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 63/120
	I0819 12:17:33.361583 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 64/120
	I0819 12:17:34.363659 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 65/120
	I0819 12:17:35.365885 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 66/120
	I0819 12:17:36.368302 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 67/120
	I0819 12:17:37.369803 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 68/120
	I0819 12:17:38.371043 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 69/120
	I0819 12:17:39.373198 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 70/120
	I0819 12:17:40.374679 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 71/120
	I0819 12:17:41.376142 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 72/120
	I0819 12:17:42.377620 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 73/120
	I0819 12:17:43.379231 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 74/120
	I0819 12:17:44.380966 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 75/120
	I0819 12:17:45.382768 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 76/120
	I0819 12:17:46.384085 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 77/120
	I0819 12:17:47.385691 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 78/120
	I0819 12:17:48.387037 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 79/120
	I0819 12:17:49.389398 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 80/120
	I0819 12:17:50.390606 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 81/120
	I0819 12:17:51.392048 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 82/120
	I0819 12:17:52.393425 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 83/120
	I0819 12:17:53.395024 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 84/120
	I0819 12:17:54.396516 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 85/120
	I0819 12:17:55.398637 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 86/120
	I0819 12:17:56.400026 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 87/120
	I0819 12:17:57.401710 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 88/120
	I0819 12:17:58.402992 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 89/120
	I0819 12:17:59.404962 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 90/120
	I0819 12:18:00.406764 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 91/120
	I0819 12:18:01.408937 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 92/120
	I0819 12:18:02.410241 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 93/120
	I0819 12:18:03.411655 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 94/120
	I0819 12:18:04.413745 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 95/120
	I0819 12:18:05.414941 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 96/120
	I0819 12:18:06.416835 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 97/120
	I0819 12:18:07.418109 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 98/120
	I0819 12:18:08.419692 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 99/120
	I0819 12:18:09.421693 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 100/120
	I0819 12:18:10.423366 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 101/120
	I0819 12:18:11.424802 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 102/120
	I0819 12:18:12.426108 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 103/120
	I0819 12:18:13.427392 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 104/120
	I0819 12:18:14.429401 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 105/120
	I0819 12:18:15.430870 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 106/120
	I0819 12:18:16.432356 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 107/120
	I0819 12:18:17.433700 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 108/120
	I0819 12:18:18.435542 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 109/120
	I0819 12:18:19.437567 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 110/120
	I0819 12:18:20.439041 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 111/120
	I0819 12:18:21.440397 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 112/120
	I0819 12:18:22.442653 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 113/120
	I0819 12:18:23.443953 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 114/120
	I0819 12:18:24.445722 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 115/120
	I0819 12:18:25.447036 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 116/120
	I0819 12:18:26.448318 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 117/120
	I0819 12:18:27.449799 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 118/120
	I0819 12:18:28.451419 2108155 main.go:141] libmachine: (ha-584983-m02) Waiting for machine to stop 119/120
	I0819 12:18:29.452545 2108155 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 12:18:29.452694 2108155 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-584983 node stop m02 -v=7 --alsologtostderr": exit status 30
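The stop loop above polls once per second for 120 attempts, so the 2m0.457s runtime is the full 120-second stop window expiring while libvirt still reported the domain as Running. As a manual triage sketch (assuming libvirt access on the CI host and the domain name shown in the log), the guest state can be inspected, and forced off if an ungraceful power-off is acceptable, with the standard libvirt tools; this is only a diagnostic fallback, not what "minikube node stop" itself is expected to do:

	virsh domstate ha-584983-m02
	virsh destroy ha-584983-m02   # immediate, ungraceful power-off; diagnostic fallback only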
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (19.151794692s)

                                                
                                                
-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:18:29.497798 2109039 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:18:29.497911 2109039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:18:29.497917 2109039 out.go:358] Setting ErrFile to fd 2...
	I0819 12:18:29.497923 2109039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:18:29.498105 2109039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:18:29.498271 2109039 out.go:352] Setting JSON to false
	I0819 12:18:29.498302 2109039 mustload.go:65] Loading cluster: ha-584983
	I0819 12:18:29.498437 2109039 notify.go:220] Checking for updates...
	I0819 12:18:29.498852 2109039 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:18:29.498879 2109039 status.go:255] checking status of ha-584983 ...
	I0819 12:18:29.499363 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:29.499425 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:29.521049 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
	I0819 12:18:29.521538 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:29.522185 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:29.522209 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:29.522643 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:29.522908 2109039 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:18:29.524602 2109039 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:18:29.524626 2109039 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:18:29.524920 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:29.524979 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:29.541358 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0819 12:18:29.541718 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:29.542156 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:29.542181 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:29.542479 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:29.542704 2109039 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:18:29.545870 2109039 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:29.546407 2109039 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:18:29.546434 2109039 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:29.546571 2109039 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:18:29.546863 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:29.546902 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:29.563109 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0819 12:18:29.563562 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:29.564003 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:29.564026 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:29.564477 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:29.564637 2109039 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:18:29.564829 2109039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:29.564857 2109039 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:18:29.567357 2109039 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:29.567848 2109039 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:18:29.567894 2109039 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:29.567954 2109039 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:18:29.568138 2109039 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:18:29.568352 2109039 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:18:29.568607 2109039 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:18:29.659908 2109039 ssh_runner.go:195] Run: systemctl --version
	I0819 12:18:29.668071 2109039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:29.685252 2109039 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:18:29.685306 2109039 api_server.go:166] Checking apiserver status ...
	I0819 12:18:29.685351 2109039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:18:29.702857 2109039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:18:29.712810 2109039 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:18:29.712858 2109039 ssh_runner.go:195] Run: ls
	I0819 12:18:29.717594 2109039 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:18:29.723666 2109039 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:18:29.723688 2109039 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:18:29.723698 2109039 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:18:29.723715 2109039 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:18:29.724031 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:29.724066 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:29.739763 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0819 12:18:29.740206 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:29.740763 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:29.740783 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:29.741123 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:29.741331 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:18:29.742711 2109039 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:18:29.742730 2109039 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:18:29.743016 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:29.743055 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:29.757670 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37333
	I0819 12:18:29.758049 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:29.758538 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:29.758564 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:29.758866 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:29.759089 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:18:29.761816 2109039 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:29.762183 2109039 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:18:29.762207 2109039 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:29.762332 2109039 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:18:29.762641 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:29.762675 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:29.778067 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0819 12:18:29.778512 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:29.779025 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:29.779044 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:29.779406 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:29.779618 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:18:29.779829 2109039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:29.779853 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:18:29.782925 2109039 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:29.783507 2109039 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:18:29.783533 2109039 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:29.783660 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:18:29.783849 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:18:29.784002 2109039 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:18:29.784135 2109039 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:18:48.240465 2109039 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:18:48.240633 2109039 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:18:48.240663 2109039 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:18:48.240676 2109039 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:18:48.240719 2109039 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:18:48.240730 2109039 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:18:48.241126 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:48.241180 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:48.256532 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0819 12:18:48.256946 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:48.257437 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:48.257455 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:48.257762 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:48.258029 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:18:48.259594 2109039 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:18:48.259613 2109039 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:18:48.259903 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:48.259937 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:48.274967 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0819 12:18:48.275363 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:48.275870 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:48.275896 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:48.276212 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:48.276421 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:18:48.278927 2109039 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:48.279392 2109039 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:18:48.279414 2109039 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:48.279598 2109039 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:18:48.279900 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:48.279934 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:48.295413 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0819 12:18:48.296075 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:48.296598 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:48.296625 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:48.296924 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:48.297099 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:18:48.297283 2109039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:48.297307 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:18:48.300132 2109039 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:48.300598 2109039 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:18:48.300622 2109039 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:48.300768 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:18:48.300932 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:18:48.301092 2109039 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:18:48.301233 2109039 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:18:48.385774 2109039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:48.402908 2109039 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:18:48.402940 2109039 api_server.go:166] Checking apiserver status ...
	I0819 12:18:48.402979 2109039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:18:48.418577 2109039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:18:48.428861 2109039 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:18:48.428916 2109039 ssh_runner.go:195] Run: ls
	I0819 12:18:48.433578 2109039 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:18:48.438002 2109039 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:18:48.438032 2109039 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:18:48.438043 2109039 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:18:48.438061 2109039 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:18:48.438377 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:48.438416 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:48.455200 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0819 12:18:48.455729 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:48.456247 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:48.456271 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:48.456641 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:48.456861 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:18:48.458667 2109039 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:18:48.458687 2109039 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:18:48.458974 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:48.459029 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:48.475098 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0819 12:18:48.475535 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:48.476010 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:48.476031 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:48.476386 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:48.476595 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:18:48.479284 2109039 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:48.479697 2109039 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:18:48.479724 2109039 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:48.479835 2109039 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:18:48.480247 2109039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:48.480299 2109039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:48.495623 2109039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0819 12:18:48.496009 2109039 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:48.496511 2109039 main.go:141] libmachine: Using API Version  1
	I0819 12:18:48.496530 2109039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:48.496850 2109039 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:48.497029 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:18:48.497206 2109039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:48.497231 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:18:48.499848 2109039 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:48.500325 2109039 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:18:48.500353 2109039 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:48.500489 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:18:48.500681 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:18:48.500864 2109039 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:18:48.501030 2109039 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:18:48.584865 2109039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:48.604346 2109039 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr" : exit status 3
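The status failure is a downstream effect of the stop timeout: the SSH dial to ha-584983-m02 at 192.168.39.84:22 fails with "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent. A quick connectivity probe from the CI host (assuming the IP, key path, and username captured in the log) might look like:

	nc -zv -w 5 192.168.39.84 22
	ssh -o ConnectTimeout=5 -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa docker@192.168.39.84 true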
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-584983 -n ha-584983
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-584983 logs -n 25: (1.350351664s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983:/home/docker/cp-test_ha-584983-m03_ha-584983.txt                       |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983 sudo cat                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983.txt                                 |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m04 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp testdata/cp-test.txt                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983:/home/docker/cp-test_ha-584983-m04_ha-584983.txt                       |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983 sudo cat                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983.txt                                 |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03:/home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m03 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-584983 node stop m02 -v=7                                                     | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:12:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:12:06.992137 2104204 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:12:06.992436 2104204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:12:06.992446 2104204 out.go:358] Setting ErrFile to fd 2...
	I0819 12:12:06.992450 2104204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:12:06.992597 2104204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:12:06.993153 2104204 out.go:352] Setting JSON to false
	I0819 12:12:06.994070 2104204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":161677,"bootTime":1723907850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:12:06.994130 2104204 start.go:139] virtualization: kvm guest
	I0819 12:12:06.996118 2104204 out.go:177] * [ha-584983] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:12:06.997456 2104204 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:12:06.997502 2104204 notify.go:220] Checking for updates...
	I0819 12:12:06.999710 2104204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:12:07.000781 2104204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:12:07.001857 2104204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:07.002961 2104204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:12:07.004066 2104204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:12:07.005400 2104204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:12:07.040832 2104204 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 12:12:07.041861 2104204 start.go:297] selected driver: kvm2
	I0819 12:12:07.041879 2104204 start.go:901] validating driver "kvm2" against <nil>
	I0819 12:12:07.041891 2104204 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:12:07.042563 2104204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:12:07.042641 2104204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:12:07.058177 2104204 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:12:07.058230 2104204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:12:07.058454 2104204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:12:07.058487 2104204 cni.go:84] Creating CNI manager for ""
	I0819 12:12:07.058497 2104204 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 12:12:07.058502 2104204 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 12:12:07.058551 2104204 start.go:340] cluster config:
	{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:12:07.058667 2104204 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:12:07.060465 2104204 out.go:177] * Starting "ha-584983" primary control-plane node in "ha-584983" cluster
	I0819 12:12:07.061680 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:12:07.061729 2104204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:12:07.061744 2104204 cache.go:56] Caching tarball of preloaded images
	I0819 12:12:07.061902 2104204 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:12:07.061922 2104204 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:12:07.062389 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:07.062422 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json: {Name:mk6e05058afcdcc1e7287821cd2e620cfeeb5720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:07.062609 2104204 start.go:360] acquireMachinesLock for ha-584983: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:12:07.062650 2104204 start.go:364] duration metric: took 24.193µs to acquireMachinesLock for "ha-584983"
	I0819 12:12:07.062672 2104204 start.go:93] Provisioning new machine with config: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:12:07.062735 2104204 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 12:12:07.065021 2104204 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:12:07.065200 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:07.065247 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:07.080779 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0819 12:12:07.081225 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:07.081879 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:07.081900 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:07.082225 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:07.082428 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:07.082565 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:07.082736 2104204 start.go:159] libmachine.API.Create for "ha-584983" (driver="kvm2")
	I0819 12:12:07.082771 2104204 client.go:168] LocalClient.Create starting
	I0819 12:12:07.082812 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 12:12:07.082856 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:07.082873 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:07.082931 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 12:12:07.082951 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:07.082961 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:07.082978 2104204 main.go:141] libmachine: Running pre-create checks...
	I0819 12:12:07.082991 2104204 main.go:141] libmachine: (ha-584983) Calling .PreCreateCheck
	I0819 12:12:07.083300 2104204 main.go:141] libmachine: (ha-584983) Calling .GetConfigRaw
	I0819 12:12:07.083711 2104204 main.go:141] libmachine: Creating machine...
	I0819 12:12:07.083728 2104204 main.go:141] libmachine: (ha-584983) Calling .Create
	I0819 12:12:07.083850 2104204 main.go:141] libmachine: (ha-584983) Creating KVM machine...
	I0819 12:12:07.085716 2104204 main.go:141] libmachine: (ha-584983) DBG | found existing default KVM network
	I0819 12:12:07.086383 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.086263 2104227 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0819 12:12:07.086448 2104204 main.go:141] libmachine: (ha-584983) DBG | created network xml: 
	I0819 12:12:07.086477 2104204 main.go:141] libmachine: (ha-584983) DBG | <network>
	I0819 12:12:07.086485 2104204 main.go:141] libmachine: (ha-584983) DBG |   <name>mk-ha-584983</name>
	I0819 12:12:07.086490 2104204 main.go:141] libmachine: (ha-584983) DBG |   <dns enable='no'/>
	I0819 12:12:07.086496 2104204 main.go:141] libmachine: (ha-584983) DBG |   
	I0819 12:12:07.086502 2104204 main.go:141] libmachine: (ha-584983) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 12:12:07.086510 2104204 main.go:141] libmachine: (ha-584983) DBG |     <dhcp>
	I0819 12:12:07.086516 2104204 main.go:141] libmachine: (ha-584983) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 12:12:07.086524 2104204 main.go:141] libmachine: (ha-584983) DBG |     </dhcp>
	I0819 12:12:07.086530 2104204 main.go:141] libmachine: (ha-584983) DBG |   </ip>
	I0819 12:12:07.086541 2104204 main.go:141] libmachine: (ha-584983) DBG |   
	I0819 12:12:07.086555 2104204 main.go:141] libmachine: (ha-584983) DBG | </network>
	I0819 12:12:07.086571 2104204 main.go:141] libmachine: (ha-584983) DBG | 
	I0819 12:12:07.091516 2104204 main.go:141] libmachine: (ha-584983) DBG | trying to create private KVM network mk-ha-584983 192.168.39.0/24...
	I0819 12:12:07.165841 2104204 main.go:141] libmachine: (ha-584983) DBG | private KVM network mk-ha-584983 192.168.39.0/24 created
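The XML above is the private libvirt network the kvm2 driver generates before creating "mk-ha-584983". As an illustrative sketch only (not minikube's own code path), the same network could be created by hand with virsh, assuming libvirt is installed and the XML shown in the log has been saved to a hypothetical local file:

    # Illustrative sketch: create the private network from the XML above.
    virsh net-define mk-ha-584983.xml   # register the network definition with libvirt
    virsh net-start mk-ha-584983        # create the bridge and start dnsmasq for DHCP
    virsh net-autostart mk-ha-584983    # optional: bring the network up on host boot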
	I0819 12:12:07.165909 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.165790 2104227 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:07.165926 2104204 main.go:141] libmachine: (ha-584983) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983 ...
	I0819 12:12:07.165961 2104204 main.go:141] libmachine: (ha-584983) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:12:07.165980 2104204 main.go:141] libmachine: (ha-584983) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:12:07.445397 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.445236 2104227 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa...
	I0819 12:12:07.630664 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.630541 2104227 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/ha-584983.rawdisk...
	I0819 12:12:07.630693 2104204 main.go:141] libmachine: (ha-584983) DBG | Writing magic tar header
	I0819 12:12:07.630704 2104204 main.go:141] libmachine: (ha-584983) DBG | Writing SSH key tar header
	I0819 12:12:07.630722 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.630658 2104227 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983 ...
	I0819 12:12:07.630734 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983
	I0819 12:12:07.630805 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983 (perms=drwx------)
	I0819 12:12:07.630837 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:12:07.630848 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 12:12:07.630858 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 12:12:07.630868 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 12:12:07.630892 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:12:07.630905 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:12:07.630916 2104204 main.go:141] libmachine: (ha-584983) Creating domain...
	I0819 12:12:07.630926 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:07.630943 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 12:12:07.630951 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:12:07.630957 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:12:07.630964 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home
	I0819 12:12:07.630972 2104204 main.go:141] libmachine: (ha-584983) DBG | Skipping /home - not owner
	I0819 12:12:07.632124 2104204 main.go:141] libmachine: (ha-584983) define libvirt domain using xml: 
	I0819 12:12:07.632157 2104204 main.go:141] libmachine: (ha-584983) <domain type='kvm'>
	I0819 12:12:07.632179 2104204 main.go:141] libmachine: (ha-584983)   <name>ha-584983</name>
	I0819 12:12:07.632194 2104204 main.go:141] libmachine: (ha-584983)   <memory unit='MiB'>2200</memory>
	I0819 12:12:07.632202 2104204 main.go:141] libmachine: (ha-584983)   <vcpu>2</vcpu>
	I0819 12:12:07.632210 2104204 main.go:141] libmachine: (ha-584983)   <features>
	I0819 12:12:07.632218 2104204 main.go:141] libmachine: (ha-584983)     <acpi/>
	I0819 12:12:07.632223 2104204 main.go:141] libmachine: (ha-584983)     <apic/>
	I0819 12:12:07.632228 2104204 main.go:141] libmachine: (ha-584983)     <pae/>
	I0819 12:12:07.632239 2104204 main.go:141] libmachine: (ha-584983)     
	I0819 12:12:07.632248 2104204 main.go:141] libmachine: (ha-584983)   </features>
	I0819 12:12:07.632254 2104204 main.go:141] libmachine: (ha-584983)   <cpu mode='host-passthrough'>
	I0819 12:12:07.632262 2104204 main.go:141] libmachine: (ha-584983)   
	I0819 12:12:07.632273 2104204 main.go:141] libmachine: (ha-584983)   </cpu>
	I0819 12:12:07.632282 2104204 main.go:141] libmachine: (ha-584983)   <os>
	I0819 12:12:07.632289 2104204 main.go:141] libmachine: (ha-584983)     <type>hvm</type>
	I0819 12:12:07.632301 2104204 main.go:141] libmachine: (ha-584983)     <boot dev='cdrom'/>
	I0819 12:12:07.632309 2104204 main.go:141] libmachine: (ha-584983)     <boot dev='hd'/>
	I0819 12:12:07.632317 2104204 main.go:141] libmachine: (ha-584983)     <bootmenu enable='no'/>
	I0819 12:12:07.632325 2104204 main.go:141] libmachine: (ha-584983)   </os>
	I0819 12:12:07.632330 2104204 main.go:141] libmachine: (ha-584983)   <devices>
	I0819 12:12:07.632338 2104204 main.go:141] libmachine: (ha-584983)     <disk type='file' device='cdrom'>
	I0819 12:12:07.632370 2104204 main.go:141] libmachine: (ha-584983)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/boot2docker.iso'/>
	I0819 12:12:07.632395 2104204 main.go:141] libmachine: (ha-584983)       <target dev='hdc' bus='scsi'/>
	I0819 12:12:07.632406 2104204 main.go:141] libmachine: (ha-584983)       <readonly/>
	I0819 12:12:07.632415 2104204 main.go:141] libmachine: (ha-584983)     </disk>
	I0819 12:12:07.632427 2104204 main.go:141] libmachine: (ha-584983)     <disk type='file' device='disk'>
	I0819 12:12:07.632440 2104204 main.go:141] libmachine: (ha-584983)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:12:07.632457 2104204 main.go:141] libmachine: (ha-584983)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/ha-584983.rawdisk'/>
	I0819 12:12:07.632472 2104204 main.go:141] libmachine: (ha-584983)       <target dev='hda' bus='virtio'/>
	I0819 12:12:07.632484 2104204 main.go:141] libmachine: (ha-584983)     </disk>
	I0819 12:12:07.632514 2104204 main.go:141] libmachine: (ha-584983)     <interface type='network'>
	I0819 12:12:07.632528 2104204 main.go:141] libmachine: (ha-584983)       <source network='mk-ha-584983'/>
	I0819 12:12:07.632537 2104204 main.go:141] libmachine: (ha-584983)       <model type='virtio'/>
	I0819 12:12:07.632565 2104204 main.go:141] libmachine: (ha-584983)     </interface>
	I0819 12:12:07.632577 2104204 main.go:141] libmachine: (ha-584983)     <interface type='network'>
	I0819 12:12:07.632642 2104204 main.go:141] libmachine: (ha-584983)       <source network='default'/>
	I0819 12:12:07.632669 2104204 main.go:141] libmachine: (ha-584983)       <model type='virtio'/>
	I0819 12:12:07.632682 2104204 main.go:141] libmachine: (ha-584983)     </interface>
	I0819 12:12:07.632692 2104204 main.go:141] libmachine: (ha-584983)     <serial type='pty'>
	I0819 12:12:07.632704 2104204 main.go:141] libmachine: (ha-584983)       <target port='0'/>
	I0819 12:12:07.632713 2104204 main.go:141] libmachine: (ha-584983)     </serial>
	I0819 12:12:07.632725 2104204 main.go:141] libmachine: (ha-584983)     <console type='pty'>
	I0819 12:12:07.632742 2104204 main.go:141] libmachine: (ha-584983)       <target type='serial' port='0'/>
	I0819 12:12:07.632753 2104204 main.go:141] libmachine: (ha-584983)     </console>
	I0819 12:12:07.632763 2104204 main.go:141] libmachine: (ha-584983)     <rng model='virtio'>
	I0819 12:12:07.632773 2104204 main.go:141] libmachine: (ha-584983)       <backend model='random'>/dev/random</backend>
	I0819 12:12:07.632783 2104204 main.go:141] libmachine: (ha-584983)     </rng>
	I0819 12:12:07.632792 2104204 main.go:141] libmachine: (ha-584983)     
	I0819 12:12:07.632799 2104204 main.go:141] libmachine: (ha-584983)     
	I0819 12:12:07.632828 2104204 main.go:141] libmachine: (ha-584983)   </devices>
	I0819 12:12:07.632853 2104204 main.go:141] libmachine: (ha-584983) </domain>
	I0819 12:12:07.632880 2104204 main.go:141] libmachine: (ha-584983) 
	I0819 12:12:07.636985 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:f0:e6:49 in network default
	I0819 12:12:07.637584 2104204 main.go:141] libmachine: (ha-584983) Ensuring networks are active...
	I0819 12:12:07.637606 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:07.638343 2104204 main.go:141] libmachine: (ha-584983) Ensuring network default is active
	I0819 12:12:07.638624 2104204 main.go:141] libmachine: (ha-584983) Ensuring network mk-ha-584983 is active
	I0819 12:12:07.639038 2104204 main.go:141] libmachine: (ha-584983) Getting domain xml...
	I0819 12:12:07.639762 2104204 main.go:141] libmachine: (ha-584983) Creating domain...
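The domain XML above is what the driver hands to libvirt when it defines the VM; "Creating domain..." corresponds to starting it. A rough virsh equivalent, assuming the XML were saved to a hypothetical file, would be (illustrative only, minikube does this through libmachine):

    # Illustrative sketch: define and start the VM from the XML shown above.
    virsh define ha-584983.xml   # register the domain definition
    virsh start ha-584983        # boot the VM (the "Creating domain..." step)
    virsh dumpxml ha-584983      # inspect the final XML, including the assigned MAC addresses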
	I0819 12:12:08.833673 2104204 main.go:141] libmachine: (ha-584983) Waiting to get IP...
	I0819 12:12:08.834563 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:08.834940 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:08.834984 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:08.834933 2104227 retry.go:31] will retry after 248.222805ms: waiting for machine to come up
	I0819 12:12:09.084568 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:09.085027 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:09.085059 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:09.084975 2104227 retry.go:31] will retry after 317.039873ms: waiting for machine to come up
	I0819 12:12:09.403400 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:09.403888 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:09.403912 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:09.403836 2104227 retry.go:31] will retry after 411.043815ms: waiting for machine to come up
	I0819 12:12:09.816372 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:09.816783 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:09.816839 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:09.816734 2104227 retry.go:31] will retry after 479.128255ms: waiting for machine to come up
	I0819 12:12:10.297403 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:10.297847 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:10.297871 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:10.297796 2104227 retry.go:31] will retry after 639.683545ms: waiting for machine to come up
	I0819 12:12:10.938598 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:10.939036 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:10.939060 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:10.938987 2104227 retry.go:31] will retry after 932.901833ms: waiting for machine to come up
	I0819 12:12:11.873122 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:11.873585 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:11.873636 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:11.873533 2104227 retry.go:31] will retry after 1.067845741s: waiting for machine to come up
	I0819 12:12:12.943413 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:12.943841 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:12.943887 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:12.943813 2104227 retry.go:31] will retry after 1.213836701s: waiting for machine to come up
	I0819 12:12:14.159270 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:14.159638 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:14.159677 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:14.159607 2104227 retry.go:31] will retry after 1.173001156s: waiting for machine to come up
	I0819 12:12:15.333975 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:15.334376 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:15.334400 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:15.334337 2104227 retry.go:31] will retry after 1.790531734s: waiting for machine to come up
	I0819 12:12:17.126466 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:17.126874 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:17.126899 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:17.126828 2104227 retry.go:31] will retry after 2.177399011s: waiting for machine to come up
	I0819 12:12:19.307163 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:19.307480 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:19.307509 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:19.307433 2104227 retry.go:31] will retry after 3.635463931s: waiting for machine to come up
	I0819 12:12:22.944987 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:22.945311 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:22.945346 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:22.945247 2104227 retry.go:31] will retry after 4.489328535s: waiting for machine to come up
	I0819 12:12:27.435618 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.436062 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has current primary IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.436087 2104204 main.go:141] libmachine: (ha-584983) Found IP for machine: 192.168.39.12
	I0819 12:12:27.436100 2104204 main.go:141] libmachine: (ha-584983) Reserving static IP address...
	I0819 12:12:27.436457 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find host DHCP lease matching {name: "ha-584983", mac: "52:54:00:be:50:ea", ip: "192.168.39.12"} in network mk-ha-584983
	I0819 12:12:27.516487 2104204 main.go:141] libmachine: (ha-584983) DBG | Getting to WaitForSSH function...
	I0819 12:12:27.516524 2104204 main.go:141] libmachine: (ha-584983) Reserved static IP address: 192.168.39.12
	I0819 12:12:27.516541 2104204 main.go:141] libmachine: (ha-584983) Waiting for SSH to be available...
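The "Waiting to get IP" retries above poll libvirt until the domain's MAC address appears in the private network's DHCP leases. A hand-run equivalent of that check (MAC taken from the log; the driver uses an increasing backoff rather than a fixed sleep) might look like:

    # Illustrative sketch: wait until the domain's MAC has a DHCP lease on mk-ha-584983.
    MAC=52:54:00:be:50:ea
    until virsh net-dhcp-leases mk-ha-584983 | grep -q "$MAC"; do
      sleep 2
    done
    virsh net-dhcp-leases mk-ha-584983 | grep "$MAC"   # shows the assigned IP (192.168.39.12 here)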
	I0819 12:12:27.518982 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.519444 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.519534 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.519580 2104204 main.go:141] libmachine: (ha-584983) DBG | Using SSH client type: external
	I0819 12:12:27.519611 2104204 main.go:141] libmachine: (ha-584983) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa (-rw-------)
	I0819 12:12:27.519653 2104204 main.go:141] libmachine: (ha-584983) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:12:27.519674 2104204 main.go:141] libmachine: (ha-584983) DBG | About to run SSH command:
	I0819 12:12:27.519688 2104204 main.go:141] libmachine: (ha-584983) DBG | exit 0
	I0819 12:12:27.648019 2104204 main.go:141] libmachine: (ha-584983) DBG | SSH cmd err, output: <nil>: 
	I0819 12:12:27.648244 2104204 main.go:141] libmachine: (ha-584983) KVM machine creation complete!
	I0819 12:12:27.648590 2104204 main.go:141] libmachine: (ha-584983) Calling .GetConfigRaw
	I0819 12:12:27.649208 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:27.649421 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:27.649574 2104204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:12:27.649592 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:27.650845 2104204 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:12:27.650862 2104204 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:12:27.650868 2104204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:12:27.650874 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:27.653255 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.653623 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.653646 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.653780 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:27.653959 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.654110 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.654232 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:27.654407 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:27.654661 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:27.654680 2104204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:12:27.763257 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
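Both WaitForSSH passes above simply run `exit 0` over SSH until it succeeds, using the client options listed in the log. A minimal retry loop performing the same probe by hand might be:

    # Illustrative sketch: probe sshd in the guest until it accepts the machine key.
    # Key path and address are the ones reported in the log above.
    KEY=/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa
    until ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
              -o ConnectTimeout=10 docker@192.168.39.12 exit 0 2>/dev/null; do
      sleep 2
    done
    echo "SSH is up"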
	I0819 12:12:27.763287 2104204 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:12:27.763299 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:27.765800 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.766163 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.766192 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.766304 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:27.766520 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.766710 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.766819 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:27.766996 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:27.767200 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:27.767219 2104204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:12:27.877123 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:12:27.877255 2104204 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:12:27.877271 2104204 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:12:27.877282 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:27.877558 2104204 buildroot.go:166] provisioning hostname "ha-584983"
	I0819 12:12:27.877584 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:27.877777 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:27.880426 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.880778 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.880808 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.880930 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:27.881120 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.881235 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.881399 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:27.881566 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:27.881733 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:27.881744 2104204 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983 && echo "ha-584983" | sudo tee /etc/hostname
	I0819 12:12:28.006185 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983
	
	I0819 12:12:28.006215 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.009165 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.009494 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.009523 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.009683 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.009906 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.010046 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.010208 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.010362 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:28.010578 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:28.010595 2104204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:12:28.129393 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:12:28.129429 2104204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:12:28.129452 2104204 buildroot.go:174] setting up certificates
	I0819 12:12:28.129468 2104204 provision.go:84] configureAuth start
	I0819 12:12:28.129481 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:28.129854 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:28.132834 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.133216 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.133246 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.133376 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.135812 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.136109 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.136179 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.136279 2104204 provision.go:143] copyHostCerts
	I0819 12:12:28.136316 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:12:28.136363 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:12:28.136392 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:12:28.136474 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:12:28.136606 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:12:28.136646 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:12:28.136656 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:12:28.136699 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:12:28.136777 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:12:28.136801 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:12:28.136808 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:12:28.136843 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:12:28.136928 2104204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983 san=[127.0.0.1 192.168.39.12 ha-584983 localhost minikube]
	I0819 12:12:28.201240 2104204 provision.go:177] copyRemoteCerts
	I0819 12:12:28.201319 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:12:28.201349 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.204320 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.204670 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.204712 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.204846 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.205067 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.205216 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.205320 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.290311 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:12:28.290389 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:12:28.313988 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:12:28.314062 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 12:12:28.337418 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:12:28.337488 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:12:28.360055 2104204 provision.go:87] duration metric: took 230.571098ms to configureAuth
	I0819 12:12:28.360086 2104204 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:12:28.360334 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:28.360451 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.363133 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.363508 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.363540 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.363701 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.363906 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.364089 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.364254 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.364416 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:28.364615 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:28.364629 2104204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:12:28.637654 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:12:28.637686 2104204 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:12:28.637697 2104204 main.go:141] libmachine: (ha-584983) Calling .GetURL
	I0819 12:12:28.639100 2104204 main.go:141] libmachine: (ha-584983) DBG | Using libvirt version 6000000
	I0819 12:12:28.641202 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.641520 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.641545 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.641761 2104204 main.go:141] libmachine: Docker is up and running!
	I0819 12:12:28.641777 2104204 main.go:141] libmachine: Reticulating splines...
	I0819 12:12:28.641785 2104204 client.go:171] duration metric: took 21.559002125s to LocalClient.Create
	I0819 12:12:28.641807 2104204 start.go:167] duration metric: took 21.559075213s to libmachine.API.Create "ha-584983"
	I0819 12:12:28.641816 2104204 start.go:293] postStartSetup for "ha-584983" (driver="kvm2")
	I0819 12:12:28.641825 2104204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:12:28.641842 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.642076 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:12:28.642101 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.644100 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.644418 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.644444 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.644599 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.644796 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.644962 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.645085 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.730178 2104204 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:12:28.734149 2104204 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:12:28.734186 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:12:28.734251 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:12:28.734345 2104204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:12:28.734357 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:12:28.734496 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:12:28.743850 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:12:28.766088 2104204 start.go:296] duration metric: took 124.26097ms for postStartSetup
	I0819 12:12:28.766136 2104204 main.go:141] libmachine: (ha-584983) Calling .GetConfigRaw
	I0819 12:12:28.766751 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:28.769288 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.769618 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.769652 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.769890 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:28.770080 2104204 start.go:128] duration metric: took 21.707330197s to createHost
	I0819 12:12:28.770103 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.772135 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.772486 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.772515 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.772646 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.772834 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.773024 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.773155 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.773324 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:28.773491 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:28.773501 2104204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:12:28.884734 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069548.858637417
	
	I0819 12:12:28.884764 2104204 fix.go:216] guest clock: 1724069548.858637417
	I0819 12:12:28.884772 2104204 fix.go:229] Guest: 2024-08-19 12:12:28.858637417 +0000 UTC Remote: 2024-08-19 12:12:28.770092271 +0000 UTC m=+21.812458134 (delta=88.545146ms)
	I0819 12:12:28.884793 2104204 fix.go:200] guest clock delta is within tolerance: 88.545146ms
	I0819 12:12:28.884799 2104204 start.go:83] releasing machines lock for "ha-584983", held for 21.822138177s
	I0819 12:12:28.884819 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.885096 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:28.887707 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.888032 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.888060 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.888208 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.888685 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.888861 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.888970 2104204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:12:28.889006 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.889141 2104204 ssh_runner.go:195] Run: cat /version.json
	I0819 12:12:28.889181 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.891594 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.891844 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.891871 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.891907 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.892104 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.892255 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.892275 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.892294 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.892438 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.892454 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.892582 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.892636 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.892769 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.892888 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.997153 2104204 ssh_runner.go:195] Run: systemctl --version
	I0819 12:12:29.002917 2104204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:12:29.156072 2104204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:12:29.162017 2104204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:12:29.162086 2104204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:12:29.177501 2104204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:12:29.177525 2104204 start.go:495] detecting cgroup driver to use...
	I0819 12:12:29.177584 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:12:29.193283 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:12:29.207112 2104204 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:12:29.207170 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:12:29.220108 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:12:29.233119 2104204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:12:29.341287 2104204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:12:29.490039 2104204 docker.go:233] disabling docker service ...
	I0819 12:12:29.490111 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:12:29.505142 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:12:29.517330 2104204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:12:29.649463 2104204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:12:29.771780 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:12:29.785363 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:12:29.803502 2104204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:12:29.803567 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.813984 2104204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:12:29.814088 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.824448 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.834691 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.845044 2104204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:12:29.855416 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.865544 2104204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.881836 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
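	The sequence of sed edits above drives all CRI-O customization through the /etc/crio/crio.conf.d/02-crio.conf drop-in rather than the main config file. A minimal spot-check of the result on the guest (a sketch; paths and values are taken from the commands above, and the surrounding lines in the drop-in may differ):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",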
	I0819 12:12:29.891658 2104204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:12:29.900933 2104204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:12:29.900975 2104204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:12:29.913701 2104204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
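	The sysctl failure above is expected on a fresh guest: the net.bridge.* keys only exist once br_netfilter is loaded, which is why minikube falls back to modprobe and then enables IPv4 forwarding. A quick manual verification on the same VM (a sketch, not part of the test run):

	    lsmod | grep br_netfilter                      # module loaded by the modprobe above
	    sudo sysctl net.bridge.bridge-nf-call-iptables # key exists now that the module is loaded
	    cat /proc/sys/net/ipv4/ip_forward              # prints 1 after the echo above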
	I0819 12:12:29.922823 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:12:30.045585 2104204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:12:30.170840 2104204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:12:30.170911 2104204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:12:30.175636 2104204 start.go:563] Will wait 60s for crictl version
	I0819 12:12:30.175706 2104204 ssh_runner.go:195] Run: which crictl
	I0819 12:12:30.179323 2104204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:12:30.222715 2104204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:12:30.222815 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:12:30.249893 2104204 ssh_runner.go:195] Run: crio --version
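	The version probes above go through the socket recorded in /etc/crictl.yaml earlier in the log. The same runtime information can be queried explicitly (standard crictl flag; output shape as reported at start.go:579 above):

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    # Version:            0.1.0
	    # RuntimeName:        cri-o
	    # RuntimeVersion:     1.29.1
	    # RuntimeApiVersion:  v1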
	I0819 12:12:30.278636 2104204 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:12:30.280057 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:30.282979 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:30.283313 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:30.283340 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:30.283535 2104204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:12:30.287799 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:12:30.300616 2104204 kubeadm.go:883] updating cluster {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:12:30.300779 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:12:30.300844 2104204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:12:30.334470 2104204 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 12:12:30.334557 2104204 ssh_runner.go:195] Run: which lz4
	I0819 12:12:30.338591 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 12:12:30.338684 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 12:12:30.342667 2104204 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 12:12:30.342699 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 12:12:31.657563 2104204 crio.go:462] duration metric: took 1.318896291s to copy over tarball
	I0819 12:12:31.657659 2104204 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 12:12:33.637692 2104204 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979997464s)
	I0819 12:12:33.637727 2104204 crio.go:469] duration metric: took 1.980129197s to extract the tarball
	I0819 12:12:33.637735 2104204 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 12:12:33.674912 2104204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:12:33.718314 2104204 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:12:33.718346 2104204 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:12:33.718356 2104204 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0819 12:12:33.718508 2104204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:12:33.718601 2104204 ssh_runner.go:195] Run: crio config
	I0819 12:12:33.768097 2104204 cni.go:84] Creating CNI manager for ""
	I0819 12:12:33.768130 2104204 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 12:12:33.768141 2104204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:12:33.768189 2104204 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-584983 NodeName:ha-584983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:12:33.768353 2104204 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-584983"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
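	This generated config is written to /var/tmp/minikube/kubeadm.yaml further down and still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm warns about during init later in this log. The migration kubeadm itself suggests, sketched here against the binaries path this run uses:

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /tmp/kubeadm-migrated.yaml   # rewrites the same spec using the newer API version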
	I0819 12:12:33.768382 2104204 kube-vip.go:115] generating kube-vip config ...
	I0819 12:12:33.768437 2104204 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:12:33.786436 2104204 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:12:33.786552 2104204 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
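	The manifest above is installed as a static pod under /etc/kubernetes/manifests, so kube-vip starts with the kubelet and advertises the HA virtual IP 192.168.39.254 that control-plane.minikube.internal resolves to. Once the control plane is up, its leader election can be observed through the lease named in the env vars (a sketch, assuming the node's admin.conf):

	    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get pods | grep kube-vip
	    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get lease plndr-cp-lock -o yaml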
	I0819 12:12:33.786622 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:12:33.796413 2104204 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:12:33.796493 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 12:12:33.805662 2104204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0819 12:12:33.821342 2104204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:12:33.836923 2104204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0819 12:12:33.852195 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 12:12:33.867701 2104204 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:12:33.871431 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:12:33.883043 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:12:34.004866 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:12:34.021983 2104204 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.12
	I0819 12:12:34.022017 2104204 certs.go:194] generating shared ca certs ...
	I0819 12:12:34.022040 2104204 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.022257 2104204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:12:34.022325 2104204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:12:34.022343 2104204 certs.go:256] generating profile certs ...
	I0819 12:12:34.022429 2104204 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:12:34.022450 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt with IP's: []
	I0819 12:12:34.174427 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt ...
	I0819 12:12:34.174457 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt: {Name:mk0d5583e764aa292baa802da2b1f9061f76f122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.174655 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key ...
	I0819 12:12:34.174670 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key: {Name:mk0927d142c6782b7ebd5507529b65dfa62b7b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.174780 2104204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7
	I0819 12:12:34.174805 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.254]
	I0819 12:12:34.336324 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7 ...
	I0819 12:12:34.336362 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7: {Name:mka4fa0f2524582fec5a532ef8eb39a00889f2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.336579 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7 ...
	I0819 12:12:34.336600 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7: {Name:mk5d6efebe3cd14d5e4ded3578c0fe43fc37706c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.336709 2104204 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:12:34.336846 2104204 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:12:34.336936 2104204 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:12:34.336964 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt with IP's: []
	I0819 12:12:34.466046 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt ...
	I0819 12:12:34.466079 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt: {Name:mk02a7937a51bf8400ff714216cc25074371bf5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.466284 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key ...
	I0819 12:12:34.466299 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key: {Name:mk8a56d868ab63bdee37415360c66158a7db1d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.466401 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:12:34.466426 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:12:34.466446 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:12:34.466464 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:12:34.466481 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:12:34.466499 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:12:34.466517 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:12:34.466535 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:12:34.466601 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:12:34.466649 2104204 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:12:34.466662 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:12:34.466695 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:12:34.466725 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:12:34.466753 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:12:34.466809 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:12:34.466849 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.466870 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.466887 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.467611 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:12:34.491994 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:12:34.513771 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:12:34.535665 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:12:34.557422 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:12:34.579963 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:12:34.602946 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:12:34.625167 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:12:34.647462 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:12:34.669898 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:12:34.691695 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:12:34.713382 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:12:34.728768 2104204 ssh_runner.go:195] Run: openssl version
	I0819 12:12:34.734128 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:12:34.744551 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.748621 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.748671 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.754036 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:12:34.765040 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:12:34.778502 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.783318 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.783388 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.796906 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:12:34.812958 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:12:34.825837 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.832477 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.832531 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.845387 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
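	The repeated pattern above (openssl x509 -hash plus a symlink) mirrors OpenSSL's CA directory layout: each trusted certificate in /etc/ssl/certs is looked up through a link named <subject-hash>.0. The hashes used for the links (b5213941 for minikubeCA, and so on) can be reproduced by hand:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, the link name used above
	    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem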
	I0819 12:12:34.857084 2104204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:12:34.860880 2104204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:12:34.860934 2104204 kubeadm.go:392] StartCluster: {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:12:34.861024 2104204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:12:34.861084 2104204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:12:34.901686 2104204 cri.go:89] found id: ""
	I0819 12:12:34.901761 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:12:34.911175 2104204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:12:34.920357 2104204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:12:34.929252 2104204 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:12:34.929269 2104204 kubeadm.go:157] found existing configuration files:
	
	I0819 12:12:34.929315 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:12:34.937722 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:12:34.937765 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:12:34.946542 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:12:34.955043 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:12:34.955099 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:12:34.963816 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:12:34.972237 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:12:34.972279 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:12:34.980981 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:12:34.989383 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:12:34.989434 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:12:34.998105 2104204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 12:12:35.109103 2104204 kubeadm.go:310] W0819 12:12:35.090621     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:12:35.109876 2104204 kubeadm.go:310] W0819 12:12:35.091392     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:12:35.207387 2104204 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:12:49.106079 2104204 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 12:12:49.106170 2104204 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 12:12:49.106272 2104204 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 12:12:49.106408 2104204 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 12:12:49.106538 2104204 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 12:12:49.106657 2104204 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 12:12:49.108333 2104204 out.go:235]   - Generating certificates and keys ...
	I0819 12:12:49.108438 2104204 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 12:12:49.108532 2104204 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 12:12:49.108635 2104204 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 12:12:49.108710 2104204 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 12:12:49.108811 2104204 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 12:12:49.108885 2104204 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 12:12:49.108963 2104204 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 12:12:49.109064 2104204 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-584983 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0819 12:12:49.109127 2104204 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 12:12:49.109246 2104204 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-584983 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0819 12:12:49.109303 2104204 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 12:12:49.109359 2104204 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 12:12:49.109398 2104204 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 12:12:49.109475 2104204 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 12:12:49.109575 2104204 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 12:12:49.109651 2104204 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 12:12:49.109719 2104204 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 12:12:49.109806 2104204 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 12:12:49.109883 2104204 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 12:12:49.109972 2104204 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 12:12:49.110056 2104204 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 12:12:49.111542 2104204 out.go:235]   - Booting up control plane ...
	I0819 12:12:49.111640 2104204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 12:12:49.111729 2104204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 12:12:49.111828 2104204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 12:12:49.111926 2104204 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 12:12:49.112018 2104204 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 12:12:49.112074 2104204 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 12:12:49.112208 2104204 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 12:12:49.112343 2104204 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 12:12:49.112419 2104204 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296484s
	I0819 12:12:49.112499 2104204 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 12:12:49.112573 2104204 kubeadm.go:310] [api-check] The API server is healthy after 8.770810911s
	I0819 12:12:49.112727 2104204 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 12:12:49.112875 2104204 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 12:12:49.112961 2104204 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 12:12:49.113134 2104204 kubeadm.go:310] [mark-control-plane] Marking the node ha-584983 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 12:12:49.113182 2104204 kubeadm.go:310] [bootstrap-token] Using token: w0w3gq.45woyd02sscpvoqz
	I0819 12:12:49.114464 2104204 out.go:235]   - Configuring RBAC rules ...
	I0819 12:12:49.114555 2104204 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 12:12:49.114624 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 12:12:49.114755 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 12:12:49.114867 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 12:12:49.114958 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 12:12:49.115086 2104204 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 12:12:49.115222 2104204 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 12:12:49.115264 2104204 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 12:12:49.115307 2104204 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 12:12:49.115313 2104204 kubeadm.go:310] 
	I0819 12:12:49.115374 2104204 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 12:12:49.115380 2104204 kubeadm.go:310] 
	I0819 12:12:49.115465 2104204 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 12:12:49.115472 2104204 kubeadm.go:310] 
	I0819 12:12:49.115501 2104204 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 12:12:49.115568 2104204 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 12:12:49.115611 2104204 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 12:12:49.115620 2104204 kubeadm.go:310] 
	I0819 12:12:49.115666 2104204 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 12:12:49.115671 2104204 kubeadm.go:310] 
	I0819 12:12:49.115722 2104204 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 12:12:49.115728 2104204 kubeadm.go:310] 
	I0819 12:12:49.115779 2104204 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 12:12:49.115849 2104204 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 12:12:49.115914 2104204 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 12:12:49.115932 2104204 kubeadm.go:310] 
	I0819 12:12:49.116029 2104204 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 12:12:49.116097 2104204 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 12:12:49.116105 2104204 kubeadm.go:310] 
	I0819 12:12:49.116204 2104204 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w0w3gq.45woyd02sscpvoqz \
	I0819 12:12:49.116320 2104204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 12:12:49.116361 2104204 kubeadm.go:310] 	--control-plane 
	I0819 12:12:49.116369 2104204 kubeadm.go:310] 
	I0819 12:12:49.116439 2104204 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 12:12:49.116446 2104204 kubeadm.go:310] 
	I0819 12:12:49.116520 2104204 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w0w3gq.45woyd02sscpvoqz \
	I0819 12:12:49.116645 2104204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
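	The --discovery-token-ca-cert-hash printed by kubeadm is the SHA-256 of the cluster CA's public key, so a joining node can verify it is talking to the intended control plane. It can be recomputed from the CA this cluster uses (minikube keeps it under /var/lib/minikube/certs rather than /etc/kubernetes/pki); the standard recipe, as a sketch:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # expected: 5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365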
	I0819 12:12:49.116667 2104204 cni.go:84] Creating CNI manager for ""
	I0819 12:12:49.116678 2104204 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 12:12:49.118040 2104204 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 12:12:49.119156 2104204 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 12:12:49.124373 2104204 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 12:12:49.124398 2104204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 12:12:49.144567 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 12:12:49.506592 2104204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 12:12:49.506695 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:49.506717 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-584983 minikube.k8s.io/updated_at=2024_08_19T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=ha-584983 minikube.k8s.io/primary=true
	I0819 12:12:49.552914 2104204 ops.go:34] apiserver oom_adj: -16
	I0819 12:12:49.759456 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:50.260269 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:50.759732 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:51.260179 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:51.397809 2104204 kubeadm.go:1113] duration metric: took 1.891185053s to wait for elevateKubeSystemPrivileges
	I0819 12:12:51.397851 2104204 kubeadm.go:394] duration metric: took 16.536920385s to StartCluster
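	The post-init steps above label the primary node and bind kube-system:default to cluster-admin through the minikube-rbac ClusterRoleBinding. Both results are visible with the same kubectl/kubeconfig pair the log uses on the guest (a sketch):

	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-584983 --show-labels
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide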
	I0819 12:12:51.397877 2104204 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:51.397972 2104204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:12:51.398984 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:51.399242 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 12:12:51.399257 2104204 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:12:51.399286 2104204 start.go:241] waiting for startup goroutines ...
	I0819 12:12:51.399296 2104204 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:12:51.399365 2104204 addons.go:69] Setting default-storageclass=true in profile "ha-584983"
	I0819 12:12:51.399370 2104204 addons.go:69] Setting storage-provisioner=true in profile "ha-584983"
	I0819 12:12:51.399407 2104204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-584983"
	I0819 12:12:51.399412 2104204 addons.go:234] Setting addon storage-provisioner=true in "ha-584983"
	I0819 12:12:51.399444 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:12:51.399477 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:51.399840 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.399869 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.399900 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.399948 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.421032 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0819 12:12:51.421185 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0819 12:12:51.421626 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.421629 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.422205 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.422208 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.422238 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.422254 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.422625 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.422631 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.422894 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:51.423233 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.423266 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.424995 2104204 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:12:51.425231 2104204 kapi.go:59] client config for ha-584983: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 12:12:51.425677 2104204 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 12:12:51.425960 2104204 addons.go:234] Setting addon default-storageclass=true in "ha-584983"
	I0819 12:12:51.426008 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:12:51.426274 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.426322 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.438922 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0819 12:12:51.439388 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.439922 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.439948 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.440311 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.440517 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:51.440666 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43435
	I0819 12:12:51.440974 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.441403 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.441426 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.441721 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.442412 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.442452 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.442669 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:51.444383 2104204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:12:51.445505 2104204 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:12:51.445521 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:12:51.445542 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:51.448647 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.449119 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:51.449149 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.449280 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:51.449477 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:51.449641 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:51.449777 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:51.457718 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0819 12:12:51.458180 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.458622 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.458645 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.458984 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.459183 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:51.460752 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:51.460983 2104204 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:12:51.461003 2104204 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:12:51.461022 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:51.463519 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.463925 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:51.463942 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.464113 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:51.464285 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:51.464417 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:51.464560 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:51.560963 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 12:12:51.633328 2104204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:12:51.663129 2104204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:12:51.971457 2104204 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 12:12:51.971536 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:51.971567 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:51.971890 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:51.971913 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:51.971926 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:51.971936 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:51.971944 2104204 main.go:141] libmachine: (ha-584983) DBG | Closing plugin on server side
	I0819 12:12:51.972219 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:51.972241 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:51.972302 2104204 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 12:12:51.972334 2104204 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 12:12:51.972447 2104204 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 12:12:51.972458 2104204 round_trippers.go:469] Request Headers:
	I0819 12:12:51.972470 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:12:51.972477 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:12:51.983689 2104204 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 12:12:51.984331 2104204 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 12:12:51.984347 2104204 round_trippers.go:469] Request Headers:
	I0819 12:12:51.984363 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:12:51.984368 2104204 round_trippers.go:473]     Content-Type: application/json
	I0819 12:12:51.984374 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:12:51.986585 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:12:51.986749 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:51.986765 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:51.987029 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:51.987051 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:52.163748 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:52.163774 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:52.164125 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:52.164149 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:52.164172 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:52.164177 2104204 main.go:141] libmachine: (ha-584983) DBG | Closing plugin on server side
	I0819 12:12:52.164182 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:52.164499 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:52.164595 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:52.164567 2104204 main.go:141] libmachine: (ha-584983) DBG | Closing plugin on server side
	I0819 12:12:52.165890 2104204 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 12:12:52.166840 2104204 addons.go:510] duration metric: took 767.541376ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 12:12:52.166878 2104204 start.go:246] waiting for cluster config update ...
	I0819 12:12:52.166891 2104204 start.go:255] writing updated cluster config ...
	I0819 12:12:52.168097 2104204 out.go:201] 
	I0819 12:12:52.169294 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:52.169394 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:52.170928 2104204 out.go:177] * Starting "ha-584983-m02" control-plane node in "ha-584983" cluster
	I0819 12:12:52.172185 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:12:52.172212 2104204 cache.go:56] Caching tarball of preloaded images
	I0819 12:12:52.172335 2104204 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:12:52.172354 2104204 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:12:52.172451 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:52.172672 2104204 start.go:360] acquireMachinesLock for ha-584983-m02: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:12:52.172737 2104204 start.go:364] duration metric: took 35.378µs to acquireMachinesLock for "ha-584983-m02"
	I0819 12:12:52.172765 2104204 start.go:93] Provisioning new machine with config: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:12:52.172870 2104204 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 12:12:52.174878 2104204 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:12:52.174966 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:52.174989 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:52.190089 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0819 12:12:52.190526 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:52.190957 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:52.190981 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:52.191295 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:52.191483 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:12:52.191630 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:12:52.191765 2104204 start.go:159] libmachine.API.Create for "ha-584983" (driver="kvm2")
	I0819 12:12:52.191796 2104204 client.go:168] LocalClient.Create starting
	I0819 12:12:52.191831 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 12:12:52.191873 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:52.191888 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:52.191938 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 12:12:52.191961 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:52.191973 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:52.191991 2104204 main.go:141] libmachine: Running pre-create checks...
	I0819 12:12:52.191999 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .PreCreateCheck
	I0819 12:12:52.192194 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetConfigRaw
	I0819 12:12:52.192614 2104204 main.go:141] libmachine: Creating machine...
	I0819 12:12:52.192633 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .Create
	I0819 12:12:52.192771 2104204 main.go:141] libmachine: (ha-584983-m02) Creating KVM machine...
	I0819 12:12:52.193817 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found existing default KVM network
	I0819 12:12:52.193975 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found existing private KVM network mk-ha-584983
	I0819 12:12:52.194093 2104204 main.go:141] libmachine: (ha-584983-m02) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02 ...
	I0819 12:12:52.194120 2104204 main.go:141] libmachine: (ha-584983-m02) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:12:52.194165 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.194070 2104574 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:52.194291 2104204 main.go:141] libmachine: (ha-584983-m02) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:12:52.456627 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.456462 2104574 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa...
	I0819 12:12:52.631668 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.631547 2104574 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/ha-584983-m02.rawdisk...
	I0819 12:12:52.631700 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Writing magic tar header
	I0819 12:12:52.631712 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Writing SSH key tar header
	I0819 12:12:52.633049 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.632918 2104574 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02 ...
	I0819 12:12:52.633168 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02
	I0819 12:12:52.633198 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 12:12:52.633213 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02 (perms=drwx------)
	I0819 12:12:52.633228 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:12:52.633239 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:52.633258 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 12:12:52.633273 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 12:12:52.633286 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:12:52.633298 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:12:52.633309 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home
	I0819 12:12:52.633320 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Skipping /home - not owner
	I0819 12:12:52.633336 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 12:12:52.633345 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:12:52.633356 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:12:52.633363 2104204 main.go:141] libmachine: (ha-584983-m02) Creating domain...
	I0819 12:12:52.634304 2104204 main.go:141] libmachine: (ha-584983-m02) define libvirt domain using xml: 
	I0819 12:12:52.634328 2104204 main.go:141] libmachine: (ha-584983-m02) <domain type='kvm'>
	I0819 12:12:52.634337 2104204 main.go:141] libmachine: (ha-584983-m02)   <name>ha-584983-m02</name>
	I0819 12:12:52.634348 2104204 main.go:141] libmachine: (ha-584983-m02)   <memory unit='MiB'>2200</memory>
	I0819 12:12:52.634358 2104204 main.go:141] libmachine: (ha-584983-m02)   <vcpu>2</vcpu>
	I0819 12:12:52.634368 2104204 main.go:141] libmachine: (ha-584983-m02)   <features>
	I0819 12:12:52.634380 2104204 main.go:141] libmachine: (ha-584983-m02)     <acpi/>
	I0819 12:12:52.634389 2104204 main.go:141] libmachine: (ha-584983-m02)     <apic/>
	I0819 12:12:52.634399 2104204 main.go:141] libmachine: (ha-584983-m02)     <pae/>
	I0819 12:12:52.634408 2104204 main.go:141] libmachine: (ha-584983-m02)     
	I0819 12:12:52.634419 2104204 main.go:141] libmachine: (ha-584983-m02)   </features>
	I0819 12:12:52.634433 2104204 main.go:141] libmachine: (ha-584983-m02)   <cpu mode='host-passthrough'>
	I0819 12:12:52.634444 2104204 main.go:141] libmachine: (ha-584983-m02)   
	I0819 12:12:52.634452 2104204 main.go:141] libmachine: (ha-584983-m02)   </cpu>
	I0819 12:12:52.634471 2104204 main.go:141] libmachine: (ha-584983-m02)   <os>
	I0819 12:12:52.634483 2104204 main.go:141] libmachine: (ha-584983-m02)     <type>hvm</type>
	I0819 12:12:52.634491 2104204 main.go:141] libmachine: (ha-584983-m02)     <boot dev='cdrom'/>
	I0819 12:12:52.634507 2104204 main.go:141] libmachine: (ha-584983-m02)     <boot dev='hd'/>
	I0819 12:12:52.634519 2104204 main.go:141] libmachine: (ha-584983-m02)     <bootmenu enable='no'/>
	I0819 12:12:52.634528 2104204 main.go:141] libmachine: (ha-584983-m02)   </os>
	I0819 12:12:52.634536 2104204 main.go:141] libmachine: (ha-584983-m02)   <devices>
	I0819 12:12:52.634551 2104204 main.go:141] libmachine: (ha-584983-m02)     <disk type='file' device='cdrom'>
	I0819 12:12:52.634569 2104204 main.go:141] libmachine: (ha-584983-m02)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/boot2docker.iso'/>
	I0819 12:12:52.634580 2104204 main.go:141] libmachine: (ha-584983-m02)       <target dev='hdc' bus='scsi'/>
	I0819 12:12:52.634590 2104204 main.go:141] libmachine: (ha-584983-m02)       <readonly/>
	I0819 12:12:52.634599 2104204 main.go:141] libmachine: (ha-584983-m02)     </disk>
	I0819 12:12:52.634630 2104204 main.go:141] libmachine: (ha-584983-m02)     <disk type='file' device='disk'>
	I0819 12:12:52.634657 2104204 main.go:141] libmachine: (ha-584983-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:12:52.634675 2104204 main.go:141] libmachine: (ha-584983-m02)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/ha-584983-m02.rawdisk'/>
	I0819 12:12:52.634698 2104204 main.go:141] libmachine: (ha-584983-m02)       <target dev='hda' bus='virtio'/>
	I0819 12:12:52.634710 2104204 main.go:141] libmachine: (ha-584983-m02)     </disk>
	I0819 12:12:52.634724 2104204 main.go:141] libmachine: (ha-584983-m02)     <interface type='network'>
	I0819 12:12:52.634736 2104204 main.go:141] libmachine: (ha-584983-m02)       <source network='mk-ha-584983'/>
	I0819 12:12:52.634749 2104204 main.go:141] libmachine: (ha-584983-m02)       <model type='virtio'/>
	I0819 12:12:52.634759 2104204 main.go:141] libmachine: (ha-584983-m02)     </interface>
	I0819 12:12:52.634771 2104204 main.go:141] libmachine: (ha-584983-m02)     <interface type='network'>
	I0819 12:12:52.634781 2104204 main.go:141] libmachine: (ha-584983-m02)       <source network='default'/>
	I0819 12:12:52.634792 2104204 main.go:141] libmachine: (ha-584983-m02)       <model type='virtio'/>
	I0819 12:12:52.634806 2104204 main.go:141] libmachine: (ha-584983-m02)     </interface>
	I0819 12:12:52.634818 2104204 main.go:141] libmachine: (ha-584983-m02)     <serial type='pty'>
	I0819 12:12:52.634832 2104204 main.go:141] libmachine: (ha-584983-m02)       <target port='0'/>
	I0819 12:12:52.634843 2104204 main.go:141] libmachine: (ha-584983-m02)     </serial>
	I0819 12:12:52.634854 2104204 main.go:141] libmachine: (ha-584983-m02)     <console type='pty'>
	I0819 12:12:52.634866 2104204 main.go:141] libmachine: (ha-584983-m02)       <target type='serial' port='0'/>
	I0819 12:12:52.634876 2104204 main.go:141] libmachine: (ha-584983-m02)     </console>
	I0819 12:12:52.634887 2104204 main.go:141] libmachine: (ha-584983-m02)     <rng model='virtio'>
	I0819 12:12:52.634899 2104204 main.go:141] libmachine: (ha-584983-m02)       <backend model='random'>/dev/random</backend>
	I0819 12:12:52.634911 2104204 main.go:141] libmachine: (ha-584983-m02)     </rng>
	I0819 12:12:52.634921 2104204 main.go:141] libmachine: (ha-584983-m02)     
	I0819 12:12:52.634929 2104204 main.go:141] libmachine: (ha-584983-m02)     
	I0819 12:12:52.634938 2104204 main.go:141] libmachine: (ha-584983-m02)   </devices>
	I0819 12:12:52.634946 2104204 main.go:141] libmachine: (ha-584983-m02) </domain>
	I0819 12:12:52.634954 2104204 main.go:141] libmachine: (ha-584983-m02) 
	I0819 12:12:52.642110 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:d5:2f:89 in network default
	I0819 12:12:52.642724 2104204 main.go:141] libmachine: (ha-584983-m02) Ensuring networks are active...
	I0819 12:12:52.642741 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:52.643453 2104204 main.go:141] libmachine: (ha-584983-m02) Ensuring network default is active
	I0819 12:12:52.643785 2104204 main.go:141] libmachine: (ha-584983-m02) Ensuring network mk-ha-584983 is active
	I0819 12:12:52.644156 2104204 main.go:141] libmachine: (ha-584983-m02) Getting domain xml...
	I0819 12:12:52.644973 2104204 main.go:141] libmachine: (ha-584983-m02) Creating domain...
	I0819 12:12:53.897826 2104204 main.go:141] libmachine: (ha-584983-m02) Waiting to get IP...
	I0819 12:12:53.898857 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:53.899351 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:53.899383 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:53.899299 2104574 retry.go:31] will retry after 269.031574ms: waiting for machine to come up
	I0819 12:12:54.169941 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:54.170433 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:54.170466 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:54.170386 2104574 retry.go:31] will retry after 286.450158ms: waiting for machine to come up
	I0819 12:12:54.459019 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:54.459477 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:54.459503 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:54.459432 2104574 retry.go:31] will retry after 334.015871ms: waiting for machine to come up
	I0819 12:12:54.795032 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:54.795539 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:54.795567 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:54.795485 2104574 retry.go:31] will retry after 458.775137ms: waiting for machine to come up
	I0819 12:12:55.256213 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:55.256685 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:55.256716 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:55.256624 2104574 retry.go:31] will retry after 686.499173ms: waiting for machine to come up
	I0819 12:12:55.944647 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:55.945090 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:55.945115 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:55.945044 2104574 retry.go:31] will retry after 761.90458ms: waiting for machine to come up
	I0819 12:12:56.709095 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:56.709452 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:56.709476 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:56.709412 2104574 retry.go:31] will retry after 792.835586ms: waiting for machine to come up
	I0819 12:12:57.503750 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:57.504255 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:57.504276 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:57.504205 2104574 retry.go:31] will retry after 1.04667002s: waiting for machine to come up
	I0819 12:12:58.552082 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:58.552542 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:58.552575 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:58.552496 2104574 retry.go:31] will retry after 1.754183402s: waiting for machine to come up
	I0819 12:13:00.309257 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:00.309657 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:00.309688 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:00.309599 2104574 retry.go:31] will retry after 1.730425769s: waiting for machine to come up
	I0819 12:13:02.041517 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:02.042048 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:02.042077 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:02.041987 2104574 retry.go:31] will retry after 2.450526902s: waiting for machine to come up
	I0819 12:13:04.494260 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:04.494637 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:04.494665 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:04.494594 2104574 retry.go:31] will retry after 2.299318579s: waiting for machine to come up
	I0819 12:13:06.797309 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:06.797784 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:06.797814 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:06.797723 2104574 retry.go:31] will retry after 3.860898262s: waiting for machine to come up
	I0819 12:13:10.659927 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:10.660360 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:10.660384 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:10.660315 2104574 retry.go:31] will retry after 4.292802453s: waiting for machine to come up
	I0819 12:13:14.954373 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:14.954842 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has current primary IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:14.954865 2104204 main.go:141] libmachine: (ha-584983-m02) Found IP for machine: 192.168.39.84
	I0819 12:13:14.954879 2104204 main.go:141] libmachine: (ha-584983-m02) Reserving static IP address...
	I0819 12:13:14.955281 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find host DHCP lease matching {name: "ha-584983-m02", mac: "52:54:00:2d:f2:2d", ip: "192.168.39.84"} in network mk-ha-584983
	I0819 12:13:15.033627 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Getting to WaitForSSH function...
	I0819 12:13:15.033664 2104204 main.go:141] libmachine: (ha-584983-m02) Reserved static IP address: 192.168.39.84
	I0819 12:13:15.033679 2104204 main.go:141] libmachine: (ha-584983-m02) Waiting for SSH to be available...
	I0819 12:13:15.036210 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.036611 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.036650 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.036797 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Using SSH client type: external
	I0819 12:13:15.036822 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa (-rw-------)
	I0819 12:13:15.036849 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:13:15.036861 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | About to run SSH command:
	I0819 12:13:15.036878 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | exit 0
	I0819 12:13:15.159773 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 12:13:15.160031 2104204 main.go:141] libmachine: (ha-584983-m02) KVM machine creation complete!
	I0819 12:13:15.160377 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetConfigRaw
	I0819 12:13:15.160953 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:15.161202 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:15.161437 2104204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:13:15.161460 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:13:15.162675 2104204 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:13:15.162691 2104204 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:13:15.162699 2104204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:13:15.162709 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.165525 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.165876 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.165913 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.166019 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.166206 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.166394 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.166530 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.166681 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.166911 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.166945 2104204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:13:15.263398 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:13:15.263425 2104204 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:13:15.263433 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.266341 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.266832 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.266862 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.267065 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.267278 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.267457 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.267606 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.267783 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.267969 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.267980 2104204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:13:15.368570 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:13:15.368650 2104204 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:13:15.368663 2104204 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:13:15.368677 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:13:15.368911 2104204 buildroot.go:166] provisioning hostname "ha-584983-m02"
	I0819 12:13:15.368939 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:13:15.369127 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.371769 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.372182 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.372211 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.372374 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.372552 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.372722 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.372851 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.373015 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.373213 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.373226 2104204 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983-m02 && echo "ha-584983-m02" | sudo tee /etc/hostname
	I0819 12:13:15.486473 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983-m02
	
	I0819 12:13:15.486504 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.489205 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.489560 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.489592 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.489766 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.489974 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.490161 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.490302 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.490458 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.490700 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.490724 2104204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:13:15.601391 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:13:15.601429 2104204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:13:15.601452 2104204 buildroot.go:174] setting up certificates
	I0819 12:13:15.601470 2104204 provision.go:84] configureAuth start
	I0819 12:13:15.601484 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:13:15.601783 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:15.604452 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.604821 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.604851 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.605013 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.607293 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.607599 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.607633 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.607774 2104204 provision.go:143] copyHostCerts
	I0819 12:13:15.607809 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:13:15.607851 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:13:15.607862 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:13:15.607957 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:13:15.608063 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:13:15.608099 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:13:15.608120 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:13:15.608185 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:13:15.608272 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:13:15.608295 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:13:15.608304 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:13:15.608342 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:13:15.608423 2104204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983-m02 san=[127.0.0.1 192.168.39.84 ha-584983-m02 localhost minikube]
	I0819 12:13:15.690822 2104204 provision.go:177] copyRemoteCerts
	I0819 12:13:15.690884 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:13:15.690918 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.693514 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.693827 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.693873 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.694023 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.694227 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.694390 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.694523 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:15.774217 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:13:15.774299 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:13:15.797945 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:13:15.798041 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:13:15.820941 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:13:15.821009 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:13:15.843686 2104204 provision.go:87] duration metric: took 242.200857ms to configureAuth
	I0819 12:13:15.843718 2104204 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:13:15.843978 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:13:15.844099 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.846609 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.846980 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.847011 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.847179 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.847371 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.847571 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.847759 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.847942 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.848188 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.848209 2104204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:13:16.116033 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:13:16.116068 2104204 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:13:16.116078 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetURL
	I0819 12:13:16.117506 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Using libvirt version 6000000
	I0819 12:13:16.119776 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.120280 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.120310 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.120512 2104204 main.go:141] libmachine: Docker is up and running!
	I0819 12:13:16.120530 2104204 main.go:141] libmachine: Reticulating splines...
	I0819 12:13:16.120539 2104204 client.go:171] duration metric: took 23.928733735s to LocalClient.Create
	I0819 12:13:16.120571 2104204 start.go:167] duration metric: took 23.928807758s to libmachine.API.Create "ha-584983"
	I0819 12:13:16.120582 2104204 start.go:293] postStartSetup for "ha-584983-m02" (driver="kvm2")
	I0819 12:13:16.120593 2104204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:13:16.120614 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.120877 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:13:16.120903 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:16.123684 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.124066 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.124095 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.124214 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.124426 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.124611 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.124749 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:16.202320 2104204 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:13:16.206514 2104204 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:13:16.206547 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:13:16.206626 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:13:16.206722 2104204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:13:16.206733 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:13:16.206815 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:13:16.216307 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:13:16.239317 2104204 start.go:296] duration metric: took 118.712174ms for postStartSetup
	I0819 12:13:16.239379 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetConfigRaw
	I0819 12:13:16.240007 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:16.242470 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.242695 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.242724 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.242925 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:13:16.243171 2104204 start.go:128] duration metric: took 24.070287155s to createHost
	I0819 12:13:16.243194 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:16.245332 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.245607 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.245627 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.245818 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.245986 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.246145 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.246293 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.246440 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:16.246645 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:16.246655 2104204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:13:16.345888 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069596.304140552
	
	I0819 12:13:16.345915 2104204 fix.go:216] guest clock: 1724069596.304140552
	I0819 12:13:16.345923 2104204 fix.go:229] Guest: 2024-08-19 12:13:16.304140552 +0000 UTC Remote: 2024-08-19 12:13:16.243182787 +0000 UTC m=+69.285548649 (delta=60.957765ms)
	I0819 12:13:16.345939 2104204 fix.go:200] guest clock delta is within tolerance: 60.957765ms
	I0819 12:13:16.345945 2104204 start.go:83] releasing machines lock for "ha-584983-m02", held for 24.173196064s
	I0819 12:13:16.345964 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.346213 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:16.348615 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.348973 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.349003 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.350946 2104204 out.go:177] * Found network options:
	I0819 12:13:16.352133 2104204 out.go:177]   - NO_PROXY=192.168.39.12
	W0819 12:13:16.353200 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:13:16.353240 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.353704 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.353896 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.353998 2104204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:13:16.354037 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	W0819 12:13:16.354124 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:13:16.354204 2104204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:13:16.354231 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:16.356770 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.356843 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.357194 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.357220 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.357241 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.357277 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.357452 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.357567 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.357659 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.357733 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.357793 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.357903 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.357984 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:16.358024 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:16.584992 2104204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:13:16.591193 2104204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:13:16.591273 2104204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:13:16.609110 2104204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:13:16.609150 2104204 start.go:495] detecting cgroup driver to use...
	I0819 12:13:16.609243 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:13:16.627685 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:13:16.641909 2104204 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:13:16.641983 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:13:16.655932 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:13:16.669791 2104204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:13:16.787142 2104204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:13:16.928691 2104204 docker.go:233] disabling docker service ...
	I0819 12:13:16.928790 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:13:16.954085 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:13:16.967075 2104204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:13:17.096747 2104204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:13:17.218115 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:13:17.231283 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:13:17.248621 2104204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:13:17.248688 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.258707 2104204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:13:17.258773 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.268699 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.278618 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.290741 2104204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:13:17.301434 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.311429 2104204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.327588 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.337676 2104204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:13:17.347153 2104204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:13:17.347207 2104204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:13:17.360141 2104204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:13:17.369969 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:13:17.510522 2104204 ssh_runner.go:195] Run: sudo systemctl restart crio
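
The block above is the CRI-O runtime being reconfigured on the new node before it is restarted: crictl is pointed at the CRI-O socket, the pause image and cgroup manager are pinned, and bridge netfilter plus IPv4 forwarding are enabled. A condensed sketch of the same steps, with paths and values taken from the log lines above:

    # point crictl at the CRI-O socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # pin the pause image and use the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # bridged traffic must traverse iptables and IPv4 forwarding must be on
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # apply the new configuration
    sudo systemctl daemon-reload && sudo systemctl restart crio
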
	I0819 12:13:17.650044 2104204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:13:17.650131 2104204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:13:17.655040 2104204 start.go:563] Will wait 60s for crictl version
	I0819 12:13:17.655123 2104204 ssh_runner.go:195] Run: which crictl
	I0819 12:13:17.658744 2104204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:13:17.697870 2104204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:13:17.697953 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:13:17.725755 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:13:17.753744 2104204 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:13:17.755025 2104204 out.go:177]   - env NO_PROXY=192.168.39.12
	I0819 12:13:17.756219 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:17.759092 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:17.759514 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:17.759546 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:17.759749 2104204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:13:17.763774 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
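
The one-liner above is how minikube idempotently pins host.minikube.internal to the host-side gateway IP: any existing entry is filtered out of /etc/hosts and a fresh mapping is appended. Spelled out as a sketch:

    # drop any stale host.minikube.internal line, then append the current mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
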
	I0819 12:13:17.775835 2104204 mustload.go:65] Loading cluster: ha-584983
	I0819 12:13:17.776043 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:13:17.776347 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:13:17.776391 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:13:17.791847 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0819 12:13:17.792302 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:13:17.792768 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:13:17.792792 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:13:17.793107 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:13:17.793299 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:13:17.794851 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:13:17.795163 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:13:17.795199 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:13:17.809895 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0819 12:13:17.810227 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:13:17.810699 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:13:17.810721 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:13:17.811093 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:13:17.811289 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:13:17.811468 2104204 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.84
	I0819 12:13:17.811480 2104204 certs.go:194] generating shared ca certs ...
	I0819 12:13:17.811493 2104204 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:13:17.811628 2104204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:13:17.811667 2104204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:13:17.811676 2104204 certs.go:256] generating profile certs ...
	I0819 12:13:17.811748 2104204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:13:17.811772 2104204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22
	I0819 12:13:17.811789 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.84 192.168.39.254]
	I0819 12:13:17.912093 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22 ...
	I0819 12:13:17.912134 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22: {Name:mkf7c7e48974c30f09a51e784b35b6e29fbaf05b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:13:17.912352 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22 ...
	I0819 12:13:17.912367 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22: {Name:mkbef6913de8786ee979fef7540740082bd9337a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:13:17.912454 2104204 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:13:17.912603 2104204 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:13:17.912746 2104204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:13:17.912765 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:13:17.912781 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:13:17.912797 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:13:17.912812 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:13:17.912827 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:13:17.912841 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:13:17.912856 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:13:17.912870 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
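
The apiserver certificate generated above is signed by minikubeCA and carries SANs for the cluster service IP, localhost, both control-plane node IPs and the 192.168.39.254 VIP, so the API server presents a valid certificate no matter which endpoint a client dials. minikube builds it with Go's crypto packages; purely as an illustration of the same SAN set (not the code path used here, and self-signed rather than CA-signed), an openssl equivalent would be:

    # illustration only -- minikube signs this with its own CA rather than self-signing
    openssl req -new -x509 -nodes -days 365 \
      -keyout apiserver.key -out apiserver.crt -subj "/CN=minikube" \
      -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.12,IP:192.168.39.84,IP:192.168.39.254"
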
	I0819 12:13:17.912925 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:13:17.912957 2104204 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:13:17.912972 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:13:17.913001 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:13:17.913027 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:13:17.913055 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:13:17.913103 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:13:17.913138 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:13:17.913155 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:17.913167 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:13:17.913222 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:13:17.916483 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:17.916872 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:13:17.916903 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:17.917087 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:13:17.917286 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:13:17.917455 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:13:17.917584 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:13:17.992523 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 12:13:17.997745 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 12:13:18.010072 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 12:13:18.014325 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 12:13:18.026626 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 12:13:18.031010 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 12:13:18.042543 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 12:13:18.046696 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 12:13:18.060239 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 12:13:18.064495 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 12:13:18.076366 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 12:13:18.080628 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 12:13:18.093027 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:13:18.121005 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:13:18.147183 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:13:18.170001 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:13:18.192130 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 12:13:18.214072 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:13:18.236341 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:13:18.258614 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:13:18.281364 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:13:18.304981 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:13:18.327846 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:13:18.351162 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 12:13:18.366683 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 12:13:18.382270 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 12:13:18.397604 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 12:13:18.413131 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 12:13:18.429199 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 12:13:18.447642 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 12:13:18.464137 2104204 ssh_runner.go:195] Run: openssl version
	I0819 12:13:18.469445 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:13:18.479674 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:13:18.484696 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:13:18.484746 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:13:18.492272 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:13:18.502754 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:13:18.513796 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:18.518268 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:18.518319 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:18.523951 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:13:18.534591 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:13:18.545352 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:13:18.549668 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:13:18.549715 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:13:18.555221 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
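
Each CA bundle is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs both by name and by its OpenSSL subject hash, which is how the verifier locates it. The pattern from the log, condensed for one of the files:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
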
	I0819 12:13:18.565441 2104204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:13:18.569203 2104204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:13:18.569270 2104204 kubeadm.go:934] updating node {m02 192.168.39.84 8443 v1.31.0 crio true true} ...
	I0819 12:13:18.569372 2104204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
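
The drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later and pins the kubelet to CRI-O, the ha-584983-m02 hostname override and the node IP 192.168.39.84. A hedged way to confirm it took effect on the node after the daemon-reload:

    # check the effective kubelet unit including drop-ins
    systemctl cat kubelet | grep -A2 '^ExecStart='
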
	I0819 12:13:18.569403 2104204 kube-vip.go:115] generating kube-vip config ...
	I0819 12:13:18.569442 2104204 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:13:18.583816 2104204 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:13:18.583882 2104204 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
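
The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (the scp a few lines below), so the kubelet runs kube-vip as a static pod on each control-plane node; with cp_enable and lb_enable set it holds the 192.168.39.254 VIP and load-balances port 8443 across the API servers. A hedged spot-check once the node is up:

    # run on a control-plane node: the static-pod manifest exists and the VIP is bound
    ls /etc/kubernetes/manifests/kube-vip.yaml
    ip addr show eth0 | grep 192.168.39.254   # only the current kube-vip leader holds the address
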
	I0819 12:13:18.583939 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:13:18.596181 2104204 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 12:13:18.596244 2104204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 12:13:18.605951 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 12:13:18.605980 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:13:18.606032 2104204 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 12:13:18.606062 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:13:18.606065 2104204 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 12:13:18.610424 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 12:13:18.610452 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 12:13:19.217885 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:13:19.217977 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:13:19.223091 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 12:13:19.223126 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 12:13:19.302418 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:13:19.339074 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:13:19.339199 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:13:19.352911 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 12:13:19.352965 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
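
kubectl, kubeadm and kubelet are fetched with checksum pinning (note the ?checksum=file:...sha256 suffix on each URL) and then copied into /var/lib/minikube/binaries/v1.31.0 on the new node. A hedged manual equivalent of the download-and-verify step for one binary:

    # sketch of the checksum-verified download that download.go performs above
    curl -fsSLO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
    curl -fsSL https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -o kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -
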
	I0819 12:13:19.787052 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 12:13:19.796620 2104204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 12:13:19.812768 2104204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:13:19.828987 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 12:13:19.845609 2104204 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:13:19.849818 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:13:19.861451 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:13:19.983011 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:13:19.998915 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:13:19.999439 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:13:19.999496 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:13:20.016891 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0819 12:13:20.017359 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:13:20.017948 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:13:20.017975 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:13:20.018341 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:13:20.018586 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:13:20.018745 2104204 start.go:317] joinCluster: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:13:20.018853 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 12:13:20.018879 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:13:20.022083 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:20.022566 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:13:20.022596 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:20.022799 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:13:20.022987 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:13:20.023162 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:13:20.023336 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:13:20.166884 2104204 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:13:20.166936 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7rwmb2.eenai9a2que74cw8 --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m02 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0819 12:13:41.434545 2104204 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7rwmb2.eenai9a2que74cw8 --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m02 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (21.267577558s)
	I0819 12:13:41.434592 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 12:13:42.093285 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-584983-m02 minikube.k8s.io/updated_at=2024_08_19T12_13_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=ha-584983 minikube.k8s.io/primary=false
	I0819 12:13:42.220239 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-584983-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 12:13:42.334084 2104204 start.go:319] duration metric: took 22.315331348s to joinCluster
	I0819 12:13:42.334202 2104204 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:13:42.334556 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:13:42.335698 2104204 out.go:177] * Verifying Kubernetes components...
	I0819 12:13:42.336881 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:13:42.604776 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:13:42.660839 2104204 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:13:42.661134 2104204 kapi.go:59] client config for ha-584983: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 12:13:42.661210 2104204 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0819 12:13:42.661466 2104204 node_ready.go:35] waiting up to 6m0s for node "ha-584983-m02" to be "Ready" ...
	I0819 12:13:42.661578 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:42.661589 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:42.661600 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:42.661608 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:42.712677 2104204 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I0819 12:13:43.161749 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:43.161775 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:43.161784 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:43.161788 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:43.175091 2104204 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0819 12:13:43.661811 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:43.661838 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:43.661850 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:43.661856 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:43.665180 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:44.161889 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:44.161919 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:44.161931 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:44.161936 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:44.165148 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:44.661957 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:44.661980 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:44.661990 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:44.661995 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:44.665188 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:44.665862 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
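
The repeated GETs above are node_ready.go polling /api/v1/nodes/ha-584983-m02 roughly every half second, waiting up to 6m0s for the node's Ready condition to turn True; the "Ready":"False" lines mark the interim polls. Outside the test harness the same wait could be expressed as (a hedged equivalent, not what minikube runs internally):

    kubectl --context ha-584983 wait node/ha-584983-m02 --for=condition=Ready --timeout=6m
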
	I0819 12:13:45.161929 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:45.161956 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:45.161967 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:45.161972 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:45.165189 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:45.662183 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:45.662216 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:45.662235 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:45.662241 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:45.665239 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:46.162091 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:46.162119 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:46.162128 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:46.162136 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:46.165208 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:46.662278 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:46.662307 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:46.662317 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:46.662323 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:46.665246 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:47.162131 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:47.162156 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:47.162165 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:47.162168 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:47.165302 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:47.166088 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:47.661922 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:47.661947 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:47.661957 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:47.661962 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:47.668230 2104204 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 12:13:48.162228 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:48.162256 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:48.162274 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:48.162291 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:48.165734 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:48.662622 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:48.662648 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:48.662655 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:48.662659 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:48.681038 2104204 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0819 12:13:49.161784 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:49.161810 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:49.161819 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:49.161824 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:49.164791 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:49.661880 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:49.661906 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:49.661915 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:49.661919 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:49.664936 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:49.665639 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:50.162389 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:50.162415 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:50.162423 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:50.162427 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:50.165446 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:50.661644 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:50.661670 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:50.661679 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:50.661684 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:50.664654 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:51.161667 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:51.161692 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:51.161700 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:51.161705 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:51.164747 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:51.661927 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:51.661953 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:51.661966 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:51.661971 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:51.665238 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:51.665748 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:52.162544 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:52.162574 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:52.162583 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:52.162587 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:52.166161 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:52.662677 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:52.662708 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:52.662718 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:52.662723 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:52.666365 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:53.162253 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:53.162277 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:53.162286 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:53.162291 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:53.165770 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:53.661871 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:53.661897 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:53.661905 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:53.661909 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:53.665448 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:53.665984 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:54.162481 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:54.162506 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:54.162520 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:54.162524 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:54.165761 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:54.661797 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:54.661826 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:54.661835 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:54.661840 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:54.664870 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:55.162164 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:55.162192 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:55.162204 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:55.162210 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:55.167627 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:13:55.661768 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:55.661798 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:55.661817 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:55.661822 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:55.665349 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:56.161811 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:56.161839 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.161848 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.161854 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.165879 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:56.167315 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:56.661653 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:56.661693 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.661703 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.661706 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.664744 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:56.665583 2104204 node_ready.go:49] node "ha-584983-m02" has status "Ready":"True"
	I0819 12:13:56.665604 2104204 node_ready.go:38] duration metric: took 14.004121231s for node "ha-584983-m02" to be "Ready" ...
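
The loop above is a plain readiness poll: GET the node object roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of that pattern (illustrative only, not minikube's node_ready.go; the kubeconfig path and timings are placeholders):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady polls the API server every interval until the named node
    // reports a Ready condition of True, or the timeout expires.
    func waitForNodeReady(cs *kubernetes.Clientset, name string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("node %q never became Ready within %v", name, timeout)
    }

    func main() {
        // Placeholder kubeconfig path; a real caller would point at the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForNodeReady(cs, "ha-584983-m02", 500*time.Millisecond, 6*time.Minute); err != nil {
            panic(err)
        }
    }
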
	I0819 12:13:56.665614 2104204 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:13:56.665679 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:13:56.665688 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.665695 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.665699 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.669994 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:56.679022 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.679109 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jgknw
	I0819 12:13:56.679119 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.679126 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.679130 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.681549 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.682126 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:56.682144 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.682151 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.682155 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.684476 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.685291 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.685317 2104204 pod_ready.go:82] duration metric: took 6.264511ms for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.685331 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.685393 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wll5k
	I0819 12:13:56.685403 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.685414 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.685424 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.688058 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.688729 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:56.688742 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.688749 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.688754 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.690780 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.691097 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.691112 2104204 pod_ready.go:82] duration metric: took 5.771616ms for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.691120 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.691162 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983
	I0819 12:13:56.691169 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.691176 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.691182 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.693576 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.694036 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:56.694051 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.694057 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.694063 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.695969 2104204 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 12:13:56.696468 2104204 pod_ready.go:93] pod "etcd-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.696483 2104204 pod_ready.go:82] duration metric: took 5.357926ms for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.696491 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.696531 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m02
	I0819 12:13:56.696538 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.696544 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.696550 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.698626 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.699057 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:56.699071 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.699077 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.699081 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.701012 2104204 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 12:13:56.701458 2104204 pod_ready.go:93] pod "etcd-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.701474 2104204 pod_ready.go:82] duration metric: took 4.976336ms for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.701489 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.861811 2104204 request.go:632] Waited for 160.243017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:13:56.861880 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:13:56.861886 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.861894 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.861899 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.865173 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.062160 2104204 request.go:632] Waited for 196.228635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.062259 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.062271 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.062284 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.062296 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.065078 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:57.065629 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:57.065652 2104204 pod_ready.go:82] duration metric: took 364.155693ms for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
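
The interleaved "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own token-bucket rate limiter, not from API Priority and Fairness on the server; with the library defaults (QPS 5, burst 10) a tight polling loop like this one hits the limiter quickly. A hedged sketch of where those limits live on a rest.Config (the QPS/Burst values and kubeconfig path below are hypothetical, not minikube's settings):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        // Leaving QPS/Burst at zero keeps the client-go defaults (5 QPS, burst 10),
        // which is what produces the client-side throttling log lines when many
        // GETs are issued back to back. Raising them is done on the rest.Config
        // before the clientset is built.
        cfg.QPS = 50
        cfg.Burst = 100

        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }
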
	I0819 12:13:57.065667 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.262578 2104204 request.go:632] Waited for 196.810462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:13:57.262649 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:13:57.262655 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.262662 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.262671 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.265829 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.461778 2104204 request.go:632] Waited for 195.295532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:57.461878 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:57.461888 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.461901 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.461909 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.464889 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:57.465385 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:57.465407 2104204 pod_ready.go:82] duration metric: took 399.732154ms for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.465417 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.662439 2104204 request.go:632] Waited for 196.945806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:13:57.662556 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:13:57.662566 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.662579 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.662588 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.666215 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.861840 2104204 request.go:632] Waited for 194.771805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.861901 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.861906 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.861913 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.861919 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.865006 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.865691 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:57.865709 2104204 pod_ready.go:82] duration metric: took 400.285505ms for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.865719 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.061721 2104204 request.go:632] Waited for 195.8977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:13:58.061795 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:13:58.061801 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.061809 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.061813 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.064976 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:58.262238 2104204 request.go:632] Waited for 196.385787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.262318 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.262324 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.262332 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.262336 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.265171 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:58.265889 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:58.265913 2104204 pod_ready.go:82] duration metric: took 400.18757ms for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.265924 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.461971 2104204 request.go:632] Waited for 195.950791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:13:58.462045 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:13:58.462053 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.462065 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.462072 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.465393 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:58.662437 2104204 request.go:632] Waited for 196.378761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.662500 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.662505 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.662513 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.662517 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.666609 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:58.667210 2104204 pod_ready.go:93] pod "kube-proxy-cwlpd" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:58.667237 2104204 pod_ready.go:82] duration metric: took 401.302617ms for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.667250 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.862220 2104204 request.go:632] Waited for 194.855301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:13:58.862313 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:13:58.862327 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.862343 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.862353 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.866569 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:59.062704 2104204 request.go:632] Waited for 195.356423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.062802 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.062808 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.062818 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.062826 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.065488 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:59.065978 2104204 pod_ready.go:93] pod "kube-proxy-w7x44" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:59.065999 2104204 pod_ready.go:82] duration metric: took 398.739905ms for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.066011 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.262206 2104204 request.go:632] Waited for 196.11142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:13:59.262294 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:13:59.262301 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.262315 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.262319 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.265474 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:59.462668 2104204 request.go:632] Waited for 196.402992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.462730 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.462735 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.462743 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.462748 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.465782 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:59.466426 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:59.466447 2104204 pod_ready.go:82] duration metric: took 400.429345ms for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.466460 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.662387 2104204 request.go:632] Waited for 195.850288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:13:59.662466 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:13:59.662478 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.662488 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.662495 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.666763 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:59.861671 2104204 request.go:632] Waited for 194.314758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:59.861749 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:59.861756 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.861767 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.861780 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.864974 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:59.865590 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:59.865613 2104204 pod_ready.go:82] duration metric: took 399.14712ms for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.865625 2104204 pod_ready.go:39] duration metric: took 3.199999525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:13:59.865661 2104204 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:13:59.865721 2104204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:13:59.882272 2104204 api_server.go:72] duration metric: took 17.548020274s to wait for apiserver process to appear ...
	I0819 12:13:59.882296 2104204 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:13:59.882317 2104204 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0819 12:13:59.886822 2104204 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0819 12:13:59.886886 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0819 12:13:59.886893 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.886901 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.886908 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.887696 2104204 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 12:13:59.887832 2104204 api_server.go:141] control plane version: v1.31.0
	I0819 12:13:59.887857 2104204 api_server.go:131] duration metric: took 5.552468ms to wait for apiserver health ...
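
The health check is two probes: a raw GET of /healthz that should return the literal body "ok", then a GET of /version to read the control-plane version. A small self-contained sketch of both probes with client-go (again the kubeconfig path is a placeholder, and this is a sketch of the pattern rather than minikube's api_server.go):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // probe performs the same two checks the log shows: a raw GET of /healthz
    // (expecting the body "ok") followed by a GET of /version.
    func probe(cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", string(body))
        }
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("version: %w", err)
        }
        fmt.Println("control plane version:", info.GitVersion) // e.g. v1.31.0
        return nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        if err := probe(kubernetes.NewForConfigOrDie(cfg)); err != nil {
            panic(err)
        }
    }
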
	I0819 12:13:59.887867 2104204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:14:00.062453 2104204 request.go:632] Waited for 174.490561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.062551 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.062564 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.062577 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.062588 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.067794 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:14:00.072140 2104204 system_pods.go:59] 17 kube-system pods found
	I0819 12:14:00.072246 2104204 system_pods.go:61] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:14:00.072261 2104204 system_pods.go:61] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:14:00.072273 2104204 system_pods.go:61] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:14:00.072280 2104204 system_pods.go:61] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:14:00.072287 2104204 system_pods.go:61] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:14:00.072294 2104204 system_pods.go:61] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:14:00.072300 2104204 system_pods.go:61] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:14:00.072310 2104204 system_pods.go:61] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:14:00.072317 2104204 system_pods.go:61] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:14:00.072326 2104204 system_pods.go:61] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:14:00.072333 2104204 system_pods.go:61] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:14:00.072349 2104204 system_pods.go:61] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:14:00.072358 2104204 system_pods.go:61] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:14:00.072365 2104204 system_pods.go:61] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:14:00.072374 2104204 system_pods.go:61] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:14:00.072380 2104204 system_pods.go:61] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:14:00.072386 2104204 system_pods.go:61] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:14:00.072398 2104204 system_pods.go:74] duration metric: took 184.520222ms to wait for pod list to return data ...
	I0819 12:14:00.072413 2104204 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:14:00.261689 2104204 request.go:632] Waited for 189.153459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:14:00.261779 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:14:00.261788 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.261797 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.261804 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.265763 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:00.266057 2104204 default_sa.go:45] found service account: "default"
	I0819 12:14:00.266082 2104204 default_sa.go:55] duration metric: took 193.659183ms for default service account to be created ...
	I0819 12:14:00.266095 2104204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:14:00.462622 2104204 request.go:632] Waited for 196.406355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.462701 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.462710 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.462722 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.462731 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.467122 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:14:00.472750 2104204 system_pods.go:86] 17 kube-system pods found
	I0819 12:14:00.472780 2104204 system_pods.go:89] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:14:00.472790 2104204 system_pods.go:89] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:14:00.472796 2104204 system_pods.go:89] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:14:00.472803 2104204 system_pods.go:89] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:14:00.472809 2104204 system_pods.go:89] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:14:00.472816 2104204 system_pods.go:89] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:14:00.472826 2104204 system_pods.go:89] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:14:00.472835 2104204 system_pods.go:89] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:14:00.472842 2104204 system_pods.go:89] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:14:00.472851 2104204 system_pods.go:89] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:14:00.472857 2104204 system_pods.go:89] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:14:00.472866 2104204 system_pods.go:89] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:14:00.472872 2104204 system_pods.go:89] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:14:00.472880 2104204 system_pods.go:89] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:14:00.472886 2104204 system_pods.go:89] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:14:00.472893 2104204 system_pods.go:89] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:14:00.472898 2104204 system_pods.go:89] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:14:00.472912 2104204 system_pods.go:126] duration metric: took 206.804625ms to wait for k8s-apps to be running ...
	I0819 12:14:00.472925 2104204 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:14:00.472982 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:14:00.488525 2104204 system_svc.go:56] duration metric: took 15.583309ms WaitForService to wait for kubelet
	I0819 12:14:00.488559 2104204 kubeadm.go:582] duration metric: took 18.154308907s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:14:00.488580 2104204 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:14:00.661911 2104204 request.go:632] Waited for 173.219136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0819 12:14:00.661975 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0819 12:14:00.661982 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.661990 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.661994 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.665644 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:00.666545 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:14:00.666575 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:14:00.666591 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:14:00.666597 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:14:00.666605 2104204 node_conditions.go:105] duration metric: took 178.019671ms to run NodePressure ...
	I0819 12:14:00.666620 2104204 start.go:241] waiting for startup goroutines ...
	I0819 12:14:00.666657 2104204 start.go:255] writing updated cluster config ...
	I0819 12:14:00.668733 2104204 out.go:201] 
	I0819 12:14:00.669978 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:00.670073 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:14:00.671581 2104204 out.go:177] * Starting "ha-584983-m03" control-plane node in "ha-584983" cluster
	I0819 12:14:00.672629 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:14:00.672649 2104204 cache.go:56] Caching tarball of preloaded images
	I0819 12:14:00.672751 2104204 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:14:00.672762 2104204 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:14:00.672841 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:14:00.672996 2104204 start.go:360] acquireMachinesLock for ha-584983-m03: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:14:00.673038 2104204 start.go:364] duration metric: took 23.224µs to acquireMachinesLock for "ha-584983-m03"
	I0819 12:14:00.673059 2104204 start.go:93] Provisioning new machine with config: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:14:00.673148 2104204 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 12:14:00.674437 2104204 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:14:00.674518 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:00.674550 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:00.691854 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0819 12:14:00.692336 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:00.692868 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:00.692891 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:00.693227 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:00.693428 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:00.693593 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:00.693773 2104204 start.go:159] libmachine.API.Create for "ha-584983" (driver="kvm2")
	I0819 12:14:00.693801 2104204 client.go:168] LocalClient.Create starting
	I0819 12:14:00.693833 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 12:14:00.693866 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:14:00.693882 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:14:00.693934 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 12:14:00.693954 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:14:00.693965 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:14:00.693980 2104204 main.go:141] libmachine: Running pre-create checks...
	I0819 12:14:00.693988 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .PreCreateCheck
	I0819 12:14:00.694155 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetConfigRaw
	I0819 12:14:00.694531 2104204 main.go:141] libmachine: Creating machine...
	I0819 12:14:00.694547 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .Create
	I0819 12:14:00.694689 2104204 main.go:141] libmachine: (ha-584983-m03) Creating KVM machine...
	I0819 12:14:00.695956 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found existing default KVM network
	I0819 12:14:00.696108 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found existing private KVM network mk-ha-584983
	I0819 12:14:00.696276 2104204 main.go:141] libmachine: (ha-584983-m03) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03 ...
	I0819 12:14:00.696302 2104204 main.go:141] libmachine: (ha-584983-m03) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:14:00.696363 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:00.696252 2104949 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:14:00.696461 2104204 main.go:141] libmachine: (ha-584983-m03) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:14:00.999229 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:00.999099 2104949 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa...
	I0819 12:14:01.120569 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:01.120423 2104949 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/ha-584983-m03.rawdisk...
	I0819 12:14:01.120607 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Writing magic tar header
	I0819 12:14:01.120622 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Writing SSH key tar header
	I0819 12:14:01.120638 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:01.120552 2104949 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03 ...
	I0819 12:14:01.120655 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03
	I0819 12:14:01.120710 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03 (perms=drwx------)
	I0819 12:14:01.120735 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:14:01.120743 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 12:14:01.120757 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:14:01.120764 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 12:14:01.120773 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 12:14:01.120782 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 12:14:01.120790 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:14:01.120798 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:14:01.120806 2104204 main.go:141] libmachine: (ha-584983-m03) Creating domain...
	I0819 12:14:01.120841 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:14:01.120869 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:14:01.120887 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home
	I0819 12:14:01.120900 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Skipping /home - not owner
	I0819 12:14:01.121989 2104204 main.go:141] libmachine: (ha-584983-m03) define libvirt domain using xml: 
	I0819 12:14:01.122014 2104204 main.go:141] libmachine: (ha-584983-m03) <domain type='kvm'>
	I0819 12:14:01.122022 2104204 main.go:141] libmachine: (ha-584983-m03)   <name>ha-584983-m03</name>
	I0819 12:14:01.122031 2104204 main.go:141] libmachine: (ha-584983-m03)   <memory unit='MiB'>2200</memory>
	I0819 12:14:01.122036 2104204 main.go:141] libmachine: (ha-584983-m03)   <vcpu>2</vcpu>
	I0819 12:14:01.122041 2104204 main.go:141] libmachine: (ha-584983-m03)   <features>
	I0819 12:14:01.122046 2104204 main.go:141] libmachine: (ha-584983-m03)     <acpi/>
	I0819 12:14:01.122053 2104204 main.go:141] libmachine: (ha-584983-m03)     <apic/>
	I0819 12:14:01.122086 2104204 main.go:141] libmachine: (ha-584983-m03)     <pae/>
	I0819 12:14:01.122108 2104204 main.go:141] libmachine: (ha-584983-m03)     
	I0819 12:14:01.122116 2104204 main.go:141] libmachine: (ha-584983-m03)   </features>
	I0819 12:14:01.122124 2104204 main.go:141] libmachine: (ha-584983-m03)   <cpu mode='host-passthrough'>
	I0819 12:14:01.122129 2104204 main.go:141] libmachine: (ha-584983-m03)   
	I0819 12:14:01.122146 2104204 main.go:141] libmachine: (ha-584983-m03)   </cpu>
	I0819 12:14:01.122159 2104204 main.go:141] libmachine: (ha-584983-m03)   <os>
	I0819 12:14:01.122170 2104204 main.go:141] libmachine: (ha-584983-m03)     <type>hvm</type>
	I0819 12:14:01.122184 2104204 main.go:141] libmachine: (ha-584983-m03)     <boot dev='cdrom'/>
	I0819 12:14:01.122192 2104204 main.go:141] libmachine: (ha-584983-m03)     <boot dev='hd'/>
	I0819 12:14:01.122198 2104204 main.go:141] libmachine: (ha-584983-m03)     <bootmenu enable='no'/>
	I0819 12:14:01.122204 2104204 main.go:141] libmachine: (ha-584983-m03)   </os>
	I0819 12:14:01.122218 2104204 main.go:141] libmachine: (ha-584983-m03)   <devices>
	I0819 12:14:01.122228 2104204 main.go:141] libmachine: (ha-584983-m03)     <disk type='file' device='cdrom'>
	I0819 12:14:01.122237 2104204 main.go:141] libmachine: (ha-584983-m03)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/boot2docker.iso'/>
	I0819 12:14:01.122244 2104204 main.go:141] libmachine: (ha-584983-m03)       <target dev='hdc' bus='scsi'/>
	I0819 12:14:01.122250 2104204 main.go:141] libmachine: (ha-584983-m03)       <readonly/>
	I0819 12:14:01.122257 2104204 main.go:141] libmachine: (ha-584983-m03)     </disk>
	I0819 12:14:01.122263 2104204 main.go:141] libmachine: (ha-584983-m03)     <disk type='file' device='disk'>
	I0819 12:14:01.122271 2104204 main.go:141] libmachine: (ha-584983-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:14:01.122279 2104204 main.go:141] libmachine: (ha-584983-m03)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/ha-584983-m03.rawdisk'/>
	I0819 12:14:01.122290 2104204 main.go:141] libmachine: (ha-584983-m03)       <target dev='hda' bus='virtio'/>
	I0819 12:14:01.122298 2104204 main.go:141] libmachine: (ha-584983-m03)     </disk>
	I0819 12:14:01.122306 2104204 main.go:141] libmachine: (ha-584983-m03)     <interface type='network'>
	I0819 12:14:01.122339 2104204 main.go:141] libmachine: (ha-584983-m03)       <source network='mk-ha-584983'/>
	I0819 12:14:01.122363 2104204 main.go:141] libmachine: (ha-584983-m03)       <model type='virtio'/>
	I0819 12:14:01.122376 2104204 main.go:141] libmachine: (ha-584983-m03)     </interface>
	I0819 12:14:01.122391 2104204 main.go:141] libmachine: (ha-584983-m03)     <interface type='network'>
	I0819 12:14:01.122404 2104204 main.go:141] libmachine: (ha-584983-m03)       <source network='default'/>
	I0819 12:14:01.122415 2104204 main.go:141] libmachine: (ha-584983-m03)       <model type='virtio'/>
	I0819 12:14:01.122426 2104204 main.go:141] libmachine: (ha-584983-m03)     </interface>
	I0819 12:14:01.122433 2104204 main.go:141] libmachine: (ha-584983-m03)     <serial type='pty'>
	I0819 12:14:01.122446 2104204 main.go:141] libmachine: (ha-584983-m03)       <target port='0'/>
	I0819 12:14:01.122462 2104204 main.go:141] libmachine: (ha-584983-m03)     </serial>
	I0819 12:14:01.122472 2104204 main.go:141] libmachine: (ha-584983-m03)     <console type='pty'>
	I0819 12:14:01.122482 2104204 main.go:141] libmachine: (ha-584983-m03)       <target type='serial' port='0'/>
	I0819 12:14:01.122493 2104204 main.go:141] libmachine: (ha-584983-m03)     </console>
	I0819 12:14:01.122504 2104204 main.go:141] libmachine: (ha-584983-m03)     <rng model='virtio'>
	I0819 12:14:01.122518 2104204 main.go:141] libmachine: (ha-584983-m03)       <backend model='random'>/dev/random</backend>
	I0819 12:14:01.122537 2104204 main.go:141] libmachine: (ha-584983-m03)     </rng>
	I0819 12:14:01.122549 2104204 main.go:141] libmachine: (ha-584983-m03)     
	I0819 12:14:01.122554 2104204 main.go:141] libmachine: (ha-584983-m03)     
	I0819 12:14:01.122562 2104204 main.go:141] libmachine: (ha-584983-m03)   </devices>
	I0819 12:14:01.122568 2104204 main.go:141] libmachine: (ha-584983-m03) </domain>
	I0819 12:14:01.122579 2104204 main.go:141] libmachine: (ha-584983-m03) 
	I0819 12:14:01.129716 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:cf:3e:00 in network default
	I0819 12:14:01.130330 2104204 main.go:141] libmachine: (ha-584983-m03) Ensuring networks are active...
	I0819 12:14:01.130344 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:01.131189 2104204 main.go:141] libmachine: (ha-584983-m03) Ensuring network default is active
	I0819 12:14:01.131676 2104204 main.go:141] libmachine: (ha-584983-m03) Ensuring network mk-ha-584983 is active
	I0819 12:14:01.132029 2104204 main.go:141] libmachine: (ha-584983-m03) Getting domain xml...
	I0819 12:14:01.132846 2104204 main.go:141] libmachine: (ha-584983-m03) Creating domain...
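The XML dump above is the libvirt domain definition the kvm2 driver hands to libvirtd before booting the node. A minimal sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings (the file name and error handling are illustrative, not minikube's actual driver code):

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Connect to the local system libvirt daemon, as the kvm2 driver does.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// domain.xml would hold a definition like the one logged above.
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			log.Fatal(err)
		}

		// Define the persistent domain, then boot it.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain defined and started")
	}

Defining the domain persistently (rather than creating it transiently) is what allows the same machine to be stopped and started again later by name.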
	I0819 12:14:02.350986 2104204 main.go:141] libmachine: (ha-584983-m03) Waiting to get IP...
	I0819 12:14:02.351754 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:02.352177 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:02.352206 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:02.352146 2104949 retry.go:31] will retry after 254.721771ms: waiting for machine to come up
	I0819 12:14:02.608737 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:02.609367 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:02.609399 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:02.609289 2104949 retry.go:31] will retry after 312.724144ms: waiting for machine to come up
	I0819 12:14:02.923936 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:02.924539 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:02.924562 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:02.924471 2104949 retry.go:31] will retry after 460.616765ms: waiting for machine to come up
	I0819 12:14:03.387241 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:03.387739 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:03.387771 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:03.387682 2104949 retry.go:31] will retry after 600.970301ms: waiting for machine to come up
	I0819 12:14:03.990382 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:03.990754 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:03.990804 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:03.990704 2104949 retry.go:31] will retry after 566.877282ms: waiting for machine to come up
	I0819 12:14:04.559624 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:04.560109 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:04.560137 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:04.560045 2104949 retry.go:31] will retry after 941.381086ms: waiting for machine to come up
	I0819 12:14:05.503107 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:05.503554 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:05.503583 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:05.503501 2104949 retry.go:31] will retry after 1.11270475s: waiting for machine to come up
	I0819 12:14:06.617508 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:06.617944 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:06.617974 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:06.617885 2104949 retry.go:31] will retry after 1.096156751s: waiting for machine to come up
	I0819 12:14:07.715745 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:07.716174 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:07.716204 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:07.716121 2104949 retry.go:31] will retry after 1.73404849s: waiting for machine to come up
	I0819 12:14:09.452967 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:09.453368 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:09.453396 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:09.453311 2104949 retry.go:31] will retry after 2.304391913s: waiting for machine to come up
	I0819 12:14:11.759060 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:11.759568 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:11.759616 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:11.759546 2104949 retry.go:31] will retry after 2.82956272s: waiting for machine to come up
	I0819 12:14:14.592113 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:14.592490 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:14.592514 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:14.592435 2104949 retry.go:31] will retry after 2.648528724s: waiting for machine to come up
	I0819 12:14:17.243558 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:17.243935 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:17.243971 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:17.243885 2104949 retry.go:31] will retry after 3.594990198s: waiting for machine to come up
	I0819 12:14:20.841951 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:20.842397 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:20.842420 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:20.842372 2104949 retry.go:31] will retry after 4.780260026s: waiting for machine to come up
	I0819 12:14:25.623774 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.624126 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has current primary IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.624144 2104204 main.go:141] libmachine: (ha-584983-m03) Found IP for machine: 192.168.39.206
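Each "will retry after …" line above is one iteration of a poll-with-backoff loop: look up the domain's MAC address in the network's DHCP leases, and if no lease exists yet, sleep a growing, slightly randomized interval and try again. A self-contained sketch of that pattern, with placeholder names and timings (lookupLeaseIP is a stand-in, not a real minikube function):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP stands in for the real DHCP-lease query; it would return the
	// IP for a MAC address once the guest has obtained a lease.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease yet") // placeholder
	}

	// waitForIP polls until a lease shows up or the deadline passes, growing the
	// wait between attempts with a little jitter, much like the log above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if backoff < 5*time.Second {
				backoff *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}

	func main() {
		ip, err := waitForIP("52:54:00:5f:1d:ec", 2*time.Minute)
		fmt.Println(ip, err)
	}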
	I0819 12:14:25.624158 2104204 main.go:141] libmachine: (ha-584983-m03) Reserving static IP address...
	I0819 12:14:25.624625 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find host DHCP lease matching {name: "ha-584983-m03", mac: "52:54:00:5f:1d:ec", ip: "192.168.39.206"} in network mk-ha-584983
	I0819 12:14:25.703388 2104204 main.go:141] libmachine: (ha-584983-m03) Reserved static IP address: 192.168.39.206
	I0819 12:14:25.703426 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Getting to WaitForSSH function...
	I0819 12:14:25.703435 2104204 main.go:141] libmachine: (ha-584983-m03) Waiting for SSH to be available...
	I0819 12:14:25.706345 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.706741 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:25.706780 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.706907 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Using SSH client type: external
	I0819 12:14:25.706939 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa (-rw-------)
	I0819 12:14:25.706974 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:14:25.706987 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | About to run SSH command:
	I0819 12:14:25.707002 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | exit 0
	I0819 12:14:25.836287 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | SSH cmd err, output: <nil>: 
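With an IP reserved, readiness is probed by running a trivial command (exit 0) over SSH using the external client and the options logged above; the first successful run means sshd inside the guest is accepting the machine's key. A rough equivalent with os/exec (flag order and the key path here are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `exit 0` succeeds over SSH, i.e. sshd is
	// accepting connections with the machine's private key.
	func sshReady(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		// keyPath is a placeholder; the log uses the machine's id_rsa under .minikube.
		for {
			if err := sshReady("192.168.39.206", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}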
	I0819 12:14:25.836568 2104204 main.go:141] libmachine: (ha-584983-m03) KVM machine creation complete!
	I0819 12:14:25.836871 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetConfigRaw
	I0819 12:14:25.837476 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:25.837689 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:25.837852 2104204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:14:25.837873 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:14:25.839667 2104204 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:14:25.839683 2104204 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:14:25.839690 2104204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:14:25.839696 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:25.842741 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.843150 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:25.843183 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.843325 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:25.843542 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.843712 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.843903 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:25.844074 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:25.844372 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:25.844385 2104204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:14:25.947624 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:14:25.947651 2104204 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:14:25.947661 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:25.950935 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.951375 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:25.951405 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.951659 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:25.951865 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.952044 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.952146 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:25.952371 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:25.952566 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:25.952579 2104204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:14:26.056715 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:14:26.056789 2104204 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:14:26.056797 2104204 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:14:26.056804 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:26.057069 2104204 buildroot.go:166] provisioning hostname "ha-584983-m03"
	I0819 12:14:26.057091 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:26.057259 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.060240 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.060658 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.060684 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.060856 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.061048 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.061225 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.061372 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.061522 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:26.061708 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:26.061724 2104204 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983-m03 && echo "ha-584983-m03" | sudo tee /etc/hostname
	I0819 12:14:26.186916 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983-m03
	
	I0819 12:14:26.186956 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.189828 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.190182 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.190217 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.190379 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.190629 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.190805 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.190958 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.191180 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:26.191368 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:26.191388 2104204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:14:26.305121 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:14:26.305153 2104204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:14:26.305176 2104204 buildroot.go:174] setting up certificates
	I0819 12:14:26.305188 2104204 provision.go:84] configureAuth start
	I0819 12:14:26.305197 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:26.305492 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:26.308001 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.308376 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.308412 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.308546 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.310734 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.311076 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.311095 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.311252 2104204 provision.go:143] copyHostCerts
	I0819 12:14:26.311286 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:14:26.311323 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:14:26.311332 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:14:26.311404 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:14:26.311474 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:14:26.311494 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:14:26.311498 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:14:26.311525 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:14:26.311573 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:14:26.311590 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:14:26.311596 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:14:26.311618 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:14:26.311669 2104204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983-m03 san=[127.0.0.1 192.168.39.206 ha-584983-m03 localhost minikube]
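The server certificate generated here is signed by the local minikube CA and carries the node's hostnames and IPs as SANs. A compact standard-library sketch of issuing such a cert (the CA is generated inline only for the example; minikube reuses its existing ca.pem/ca-key.pem, and the SAN list mirrors the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative CA generated on the fly; in practice the existing CA key pair is loaded.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert for the new node, with the hostname and IP SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-584983-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-584983-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.206")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}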
	I0819 12:14:26.569121 2104204 provision.go:177] copyRemoteCerts
	I0819 12:14:26.569187 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:14:26.569218 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.571873 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.572230 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.572264 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.572375 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.572581 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.572737 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.572838 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:26.654268 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:14:26.654339 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:14:26.678250 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:14:26.678329 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:14:26.701739 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:14:26.701809 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:14:26.724261 2104204 provision.go:87] duration metric: took 419.058353ms to configureAuth
	I0819 12:14:26.724291 2104204 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:14:26.724515 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:26.724602 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.727208 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.727602 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.727631 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.727768 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.727995 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.728206 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.728405 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.728612 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:26.728778 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:26.728793 2104204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:14:26.992584 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:14:26.992619 2104204 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:14:26.992631 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetURL
	I0819 12:14:26.994035 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Using libvirt version 6000000
	I0819 12:14:26.996078 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.996722 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.996790 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.996848 2104204 main.go:141] libmachine: Docker is up and running!
	I0819 12:14:26.996881 2104204 main.go:141] libmachine: Reticulating splines...
	I0819 12:14:26.996891 2104204 client.go:171] duration metric: took 26.303082645s to LocalClient.Create
	I0819 12:14:26.996917 2104204 start.go:167] duration metric: took 26.303143911s to libmachine.API.Create "ha-584983"
	I0819 12:14:26.996931 2104204 start.go:293] postStartSetup for "ha-584983-m03" (driver="kvm2")
	I0819 12:14:26.996946 2104204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:14:26.996970 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:26.997237 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:14:26.997265 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.999332 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.999694 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.999723 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.999912 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.000115 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.000277 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.000506 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:27.082814 2104204 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:14:27.087172 2104204 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:14:27.087292 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:14:27.087365 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:14:27.087441 2104204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:14:27.087452 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:14:27.087538 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:14:27.096598 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:14:27.123121 2104204 start.go:296] duration metric: took 126.170256ms for postStartSetup
	I0819 12:14:27.123184 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetConfigRaw
	I0819 12:14:27.123820 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:27.126742 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.127155 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.127176 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.127439 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:14:27.127643 2104204 start.go:128] duration metric: took 26.454483741s to createHost
	I0819 12:14:27.127669 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:27.130192 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.130594 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.130613 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.130817 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.131019 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.131171 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.131302 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.131452 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:27.131626 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:27.131637 2104204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:14:27.236813 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069667.215694086
	
	I0819 12:14:27.236837 2104204 fix.go:216] guest clock: 1724069667.215694086
	I0819 12:14:27.236845 2104204 fix.go:229] Guest: 2024-08-19 12:14:27.215694086 +0000 UTC Remote: 2024-08-19 12:14:27.127658206 +0000 UTC m=+140.170024068 (delta=88.03588ms)
	I0819 12:14:27.236861 2104204 fix.go:200] guest clock delta is within tolerance: 88.03588ms
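The clock check runs date +%s.%N in the guest, compares the result with the host's reference time, and only resynchronizes when the difference exceeds a tolerance; here 27.215694086 s minus 27.127658206 s is about 88.04 ms, comfortably inside the bound. A small sketch of that comparison (the 2 s tolerance is an assumed value for illustration):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far it
	// is from the given host reference time.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Date(2024, 8, 19, 12, 14, 27, 127658206, time.UTC)
		delta, err := clockDelta("1724069667.215694086", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed bound, for illustration only
		fmt.Printf("delta=%v within tolerance=%v: %v\n",
			delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
	}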
	I0819 12:14:27.236869 2104204 start.go:83] releasing machines lock for "ha-584983-m03", held for 26.563821598s
	I0819 12:14:27.236893 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.237203 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:27.239892 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.240335 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.240368 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.242273 2104204 out.go:177] * Found network options:
	I0819 12:14:27.243478 2104204 out.go:177]   - NO_PROXY=192.168.39.12,192.168.39.84
	W0819 12:14:27.244531 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 12:14:27.244557 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:14:27.244570 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.245052 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.245224 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.245325 2104204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:14:27.245360 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	W0819 12:14:27.245438 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 12:14:27.245462 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:14:27.245531 2104204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:14:27.245554 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:27.248065 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.248461 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.248483 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.248540 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.248697 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.248887 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.248949 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.248970 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.249067 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.249154 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.249266 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:27.249316 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.249452 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.249594 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:27.483338 2104204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:14:27.489552 2104204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:14:27.489625 2104204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:14:27.506080 2104204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
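Disabling the stock bridge and podman CNI configs is just a rename: anything matching those names in /etc/cni/net.d gets a .mk_disabled suffix, so only the CNI config minikube manages stays active. A sketch of the same idea in Go (a stand-in for the find/mv pipeline above, not the real implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIConfs renames bridge/podman CNI config files so they are
	// no longer picked up, mirroring the logged find/mv command.
	func disableBridgeCNIConfs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableBridgeCNIConfs("/etc/cni/net.d")
		fmt.Println(files, err)
	}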
	I0819 12:14:27.506102 2104204 start.go:495] detecting cgroup driver to use...
	I0819 12:14:27.506167 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:14:27.521558 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:14:27.534443 2104204 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:14:27.534504 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:14:27.547226 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:14:27.560033 2104204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:14:27.672349 2104204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:14:27.816992 2104204 docker.go:233] disabling docker service ...
	I0819 12:14:27.817076 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:14:27.832296 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:14:27.844787 2104204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:14:27.961231 2104204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:14:28.071288 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:14:28.084619 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:14:28.105116 2104204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:14:28.105187 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.116093 2104204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:14:28.116177 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.125944 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.135877 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.146561 2104204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:14:28.157008 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.167047 2104204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.184255 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
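Taken together, those sed edits leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (an illustrative reconstruction assembled from the commands above, not a dump of the actual file):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]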
	I0819 12:14:28.195574 2104204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:14:28.205700 2104204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:14:28.205767 2104204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:14:28.219766 2104204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:14:28.230896 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:14:28.351467 2104204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:14:28.485037 2104204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:14:28.485133 2104204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:14:28.490110 2104204 start.go:563] Will wait 60s for crictl version
	I0819 12:14:28.490186 2104204 ssh_runner.go:195] Run: which crictl
	I0819 12:14:28.493774 2104204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:14:28.537688 2104204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:14:28.537792 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:14:28.569296 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:14:28.598186 2104204 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:14:28.599268 2104204 out.go:177]   - env NO_PROXY=192.168.39.12
	I0819 12:14:28.600300 2104204 out.go:177]   - env NO_PROXY=192.168.39.12,192.168.39.84
	I0819 12:14:28.601126 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:28.603797 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:28.604191 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:28.604233 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:28.604433 2104204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:14:28.608873 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:14:28.621020 2104204 mustload.go:65] Loading cluster: ha-584983
	I0819 12:14:28.621244 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:28.621496 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:28.621530 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:28.636419 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0819 12:14:28.636831 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:28.637341 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:28.637364 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:28.637661 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:28.637830 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:14:28.639368 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:14:28.639669 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:28.639705 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:28.655414 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0819 12:14:28.655810 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:28.656302 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:28.656326 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:28.656660 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:28.656871 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:14:28.657048 2104204 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.206
	I0819 12:14:28.657062 2104204 certs.go:194] generating shared ca certs ...
	I0819 12:14:28.657084 2104204 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:14:28.657227 2104204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:14:28.657281 2104204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:14:28.657302 2104204 certs.go:256] generating profile certs ...
	I0819 12:14:28.657402 2104204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:14:28.657436 2104204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58
	I0819 12:14:28.657457 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.84 192.168.39.206 192.168.39.254]
	I0819 12:14:28.773852 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58 ...
	I0819 12:14:28.773896 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58: {Name:mk4cd36d1696e421580291e5fb6da02c4ccebbd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:14:28.774113 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58 ...
	I0819 12:14:28.774145 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58: {Name:mk079dcb9b42e265e68c0bc488eb971145164250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:14:28.774272 2104204 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:14:28.774450 2104204 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:14:28.774649 2104204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:14:28.774672 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:14:28.774696 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:14:28.774721 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:14:28.774743 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:14:28.774773 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:14:28.774795 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:14:28.774826 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:14:28.774848 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:14:28.774924 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:14:28.774973 2104204 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:14:28.774989 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:14:28.775037 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:14:28.775089 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:14:28.775132 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:14:28.775196 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:14:28.775267 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:28.775308 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:14:28.775330 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:14:28.775381 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:14:28.778609 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:28.779092 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:14:28.779120 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:28.779305 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:14:28.779521 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:14:28.779676 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:14:28.779796 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:14:28.856515 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 12:14:28.862405 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 12:14:28.875422 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 12:14:28.881054 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 12:14:28.891797 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 12:14:28.895792 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 12:14:28.905794 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 12:14:28.909941 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 12:14:28.920482 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 12:14:28.924756 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 12:14:28.934368 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 12:14:28.938349 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 12:14:28.948602 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:14:28.973356 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:14:28.996824 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:14:29.020087 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:14:29.044099 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 12:14:29.067678 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:14:29.091307 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:14:29.116066 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:14:29.138928 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:14:29.161806 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:14:29.184865 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:14:29.207794 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 12:14:29.224248 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 12:14:29.239897 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 12:14:29.255874 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 12:14:29.271437 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 12:14:29.290155 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 12:14:29.307303 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 12:14:29.324624 2104204 ssh_runner.go:195] Run: openssl version
	I0819 12:14:29.330247 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:14:29.340676 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:14:29.344986 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:14:29.345045 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:14:29.350737 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:14:29.361299 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:14:29.371976 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:29.376477 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:29.376530 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:29.382102 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:14:29.393080 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:14:29.402898 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:14:29.406969 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:14:29.407019 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:14:29.412367 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 12:14:29.422166 2104204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:14:29.425960 2104204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:14:29.426012 2104204 kubeadm.go:934] updating node {m03 192.168.39.206 8443 v1.31.0 crio true true} ...
	I0819 12:14:29.426148 2104204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:14:29.426186 2104204 kube-vip.go:115] generating kube-vip config ...
	I0819 12:14:29.426219 2104204 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:14:29.442533 2104204 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:14:29.442652 2104204 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 12:14:29.442716 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:14:29.452086 2104204 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 12:14:29.452139 2104204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 12:14:29.461645 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 12:14:29.461666 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 12:14:29.461690 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:14:29.461703 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:14:29.461701 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 12:14:29.462120 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:14:29.462253 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:14:29.462253 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:14:29.467676 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 12:14:29.467706 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 12:14:29.482750 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 12:14:29.482780 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 12:14:29.498325 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:14:29.498443 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:14:29.546603 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 12:14:29.546646 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 12:14:30.298766 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 12:14:30.309956 2104204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 12:14:30.326907 2104204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:14:30.343565 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 12:14:30.360101 2104204 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:14:30.363835 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:14:30.376800 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:14:30.516076 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:14:30.533984 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:14:30.534589 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:30.534646 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:30.550695 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0819 12:14:30.551220 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:30.551752 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:30.551775 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:30.552098 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:30.552355 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:14:30.552546 2104204 start.go:317] joinCluster: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:14:30.552686 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 12:14:30.552704 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:14:30.555706 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:30.556186 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:14:30.556215 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:30.556404 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:14:30.556597 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:14:30.556787 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:14:30.556969 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:14:30.712114 2104204 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:14:30.712215 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1xu9cx.uq1y40mmh58x06ui --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m03 --control-plane --apiserver-advertise-address=192.168.39.206 --apiserver-bind-port=8443"
	I0819 12:14:52.735810 2104204 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1xu9cx.uq1y40mmh58x06ui --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m03 --control-plane --apiserver-advertise-address=192.168.39.206 --apiserver-bind-port=8443": (22.023557555s)
	I0819 12:14:52.735855 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 12:14:53.362522 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-584983-m03 minikube.k8s.io/updated_at=2024_08_19T12_14_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=ha-584983 minikube.k8s.io/primary=false
	I0819 12:14:53.499804 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-584983-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 12:14:53.621707 2104204 start.go:319] duration metric: took 23.069156582s to joinCluster
	I0819 12:14:53.621813 2104204 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:14:53.622191 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:53.623181 2104204 out.go:177] * Verifying Kubernetes components...
	I0819 12:14:53.624234 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:14:53.901854 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:14:53.954902 2104204 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:14:53.955214 2104204 kapi.go:59] client config for ha-584983: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 12:14:53.955290 2104204 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0819 12:14:53.955520 2104204 node_ready.go:35] waiting up to 6m0s for node "ha-584983-m03" to be "Ready" ...
	I0819 12:14:53.955617 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:53.955626 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:53.955636 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:53.955642 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:53.959497 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:54.456048 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:54.456071 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:54.456079 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:54.456082 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:54.459876 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:54.955914 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:54.955941 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:54.955949 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:54.955953 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:54.959378 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:55.456092 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:55.456123 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:55.456136 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:55.456143 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:55.459368 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:55.956329 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:55.956355 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:55.956363 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:55.956367 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:55.959453 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:55.959948 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:14:56.456394 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:56.456420 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:56.456431 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:56.456439 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:56.459564 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:56.956730 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:56.956757 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:56.956766 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:56.956778 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:56.960699 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:57.456007 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:57.456035 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:57.456048 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:57.456057 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:57.460225 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:14:57.956112 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:57.956137 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:57.956145 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:57.956150 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:57.959894 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:57.960970 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:14:58.456430 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:58.456460 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:58.456473 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:58.456480 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:58.460147 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:58.955709 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:58.955736 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:58.955745 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:58.955751 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:58.958939 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:59.455787 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:59.455810 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:59.455818 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:59.455823 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:59.458879 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:59.955810 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:59.955836 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:59.955844 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:59.955848 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:59.959134 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:00.456575 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:00.456607 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:00.456619 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:00.456630 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:00.459888 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:00.460622 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:15:00.955830 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:00.955857 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:00.955866 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:00.955869 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:00.959127 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:01.456360 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:01.456384 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:01.456393 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:01.456397 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:01.459675 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:01.956470 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:01.956496 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:01.956505 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:01.956509 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:01.959720 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:02.456600 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:02.456624 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:02.456633 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:02.456636 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:02.459651 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:02.956405 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:02.956430 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:02.956438 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:02.956442 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:02.960129 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:02.960713 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:15:03.456074 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:03.456101 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:03.456113 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:03.456120 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:03.459158 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:03.955928 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:03.955953 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:03.955962 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:03.955967 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:03.959473 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:04.456684 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:04.456709 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:04.456716 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:04.456720 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:04.459782 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:04.955826 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:04.955851 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:04.955859 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:04.955863 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:04.959157 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:05.456619 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:05.456643 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:05.456650 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:05.456654 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:05.459854 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:05.460820 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:15:05.956069 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:05.956092 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:05.956100 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:05.956106 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:05.959136 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:06.455813 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:06.455839 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:06.455848 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:06.455851 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:06.459182 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:06.956340 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:06.956363 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:06.956371 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:06.956376 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:06.960125 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:07.456415 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:07.456439 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.456452 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.456456 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.459570 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:07.956601 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:07.956630 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.956642 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.956648 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.959902 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:07.960630 2104204 node_ready.go:49] node "ha-584983-m03" has status "Ready":"True"
	I0819 12:15:07.960656 2104204 node_ready.go:38] duration metric: took 14.00511848s for node "ha-584983-m03" to be "Ready" ...
	I0819 12:15:07.960669 2104204 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:07.960760 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:07.960774 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.960785 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.960796 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.966580 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:15:07.973649 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.973735 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jgknw
	I0819 12:15:07.973743 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.973752 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.973759 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.976531 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.977309 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:07.977324 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.977332 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.977338 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.979648 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.980271 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.980289 2104204 pod_ready.go:82] duration metric: took 6.614781ms for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.980297 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.980349 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wll5k
	I0819 12:15:07.980356 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.980362 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.980366 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.983154 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.983855 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:07.983871 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.983878 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.983883 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.986190 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.986693 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.986716 2104204 pod_ready.go:82] duration metric: took 6.41012ms for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.986728 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.986796 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983
	I0819 12:15:07.986807 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.986817 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.986828 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.989085 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.989615 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:07.989631 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.989638 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.989642 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.991818 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.992253 2104204 pod_ready.go:93] pod "etcd-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.992271 2104204 pod_ready.go:82] duration metric: took 5.532521ms for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.992279 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.992336 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m02
	I0819 12:15:07.992347 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.992356 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.992366 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.994606 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.995154 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:07.995167 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.995174 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.995179 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.997476 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.998086 2104204 pod_ready.go:93] pod "etcd-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.998106 2104204 pod_ready.go:82] duration metric: took 5.818832ms for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.998117 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.157366 2104204 request.go:632] Waited for 159.156247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m03
	I0819 12:15:08.157430 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m03
	I0819 12:15:08.157435 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.157443 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.157447 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.160549 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:08.356620 2104204 request.go:632] Waited for 195.29374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:08.356695 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:08.356700 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.356708 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.356714 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.359573 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:08.360361 2104204 pod_ready.go:93] pod "etcd-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:08.360385 2104204 pod_ready.go:82] duration metric: took 362.260516ms for pod "etcd-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.360408 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.557432 2104204 request.go:632] Waited for 196.915612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:15:08.557499 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:15:08.557505 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.557513 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.557530 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.560573 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:08.757092 2104204 request.go:632] Waited for 195.861922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:08.757166 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:08.757172 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.757181 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.757189 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.760119 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:08.760622 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:08.760643 2104204 pod_ready.go:82] duration metric: took 400.223597ms for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.760653 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.956671 2104204 request.go:632] Waited for 195.921498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:15:08.956739 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:15:08.956746 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.956758 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.956769 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.960294 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.157632 2104204 request.go:632] Waited for 196.352944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:09.157693 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:09.157698 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.157707 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.157710 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.160781 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.161476 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:09.161496 2104204 pod_ready.go:82] duration metric: took 400.837426ms for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.161509 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.357516 2104204 request.go:632] Waited for 195.903665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m03
	I0819 12:15:09.357586 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m03
	I0819 12:15:09.357594 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.357602 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.357608 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.361067 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.557354 2104204 request.go:632] Waited for 195.250619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:09.557416 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:09.557421 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.557429 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.557434 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.560616 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.561242 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:09.561260 2104204 pod_ready.go:82] duration metric: took 399.742194ms for pod "kube-apiserver-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.561275 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.757401 2104204 request.go:632] Waited for 195.982712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:15:09.757524 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:15:09.757539 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.757550 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.757555 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.761959 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:15:09.956990 2104204 request.go:632] Waited for 194.349441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:09.957075 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:09.957084 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.957097 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.957106 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.960308 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.960973 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:09.960996 2104204 pod_ready.go:82] duration metric: took 399.711575ms for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.961006 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.157545 2104204 request.go:632] Waited for 196.457169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:15:10.157644 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:15:10.157653 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.157661 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.157666 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.160852 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:10.357043 2104204 request.go:632] Waited for 195.359572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:10.357124 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:10.357131 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.357140 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.357144 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.360230 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:10.360778 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:10.360799 2104204 pod_ready.go:82] duration metric: took 399.786556ms for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.360808 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.556938 2104204 request.go:632] Waited for 196.047517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m03
	I0819 12:15:10.557022 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m03
	I0819 12:15:10.557028 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.557037 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.557043 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.560177 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:10.757474 2104204 request.go:632] Waited for 196.333304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:10.757550 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:10.757562 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.757579 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.757584 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.763131 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:15:10.763676 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:10.763703 2104204 pod_ready.go:82] duration metric: took 402.887961ms for pod "kube-controller-manager-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.763717 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.956681 2104204 request.go:632] Waited for 192.846899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:15:10.956762 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:15:10.956768 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.956776 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.956780 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.960560 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.156751 2104204 request.go:632] Waited for 195.296467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:11.156825 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:11.156833 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.156845 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.156855 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.159753 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:11.160444 2104204 pod_ready.go:93] pod "kube-proxy-cwlpd" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:11.160465 2104204 pod_ready.go:82] duration metric: took 396.73822ms for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.160479 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4pns" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.357618 2104204 request.go:632] Waited for 197.026377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4pns
	I0819 12:15:11.357691 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4pns
	I0819 12:15:11.357698 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.357710 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.357720 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.360860 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.557593 2104204 request.go:632] Waited for 196.016248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:11.557661 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:11.557666 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.557674 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.557679 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.560725 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.561312 2104204 pod_ready.go:93] pod "kube-proxy-f4pns" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:11.561334 2104204 pod_ready.go:82] duration metric: took 400.848183ms for pod "kube-proxy-f4pns" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.561344 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.757341 2104204 request.go:632] Waited for 195.909579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:15:11.757419 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:15:11.757424 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.757433 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.757437 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.760923 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.957181 2104204 request.go:632] Waited for 195.376815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:11.957280 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:11.957292 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.957302 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.957308 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.960296 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:11.960985 2104204 pod_ready.go:93] pod "kube-proxy-w7x44" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:11.961011 2104204 pod_ready.go:82] duration metric: took 399.658569ms for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.961023 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.156616 2104204 request.go:632] Waited for 195.506599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:15:12.156694 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:15:12.156702 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.156714 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.156724 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.159566 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.357553 2104204 request.go:632] Waited for 197.442609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:12.357654 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:12.357665 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.357677 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.357688 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.360676 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.361282 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:12.361303 2104204 pod_ready.go:82] duration metric: took 400.27142ms for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.361316 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.557368 2104204 request.go:632] Waited for 195.948401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:15:12.557431 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:15:12.557437 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.557447 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.557453 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.560413 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.757364 2104204 request.go:632] Waited for 196.253239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:12.757423 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:12.757428 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.757439 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.757444 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.760447 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.761072 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:12.761091 2104204 pod_ready.go:82] duration metric: took 399.766426ms for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.761106 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.957072 2104204 request.go:632] Waited for 195.883758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m03
	I0819 12:15:12.957165 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m03
	I0819 12:15:12.957182 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.957193 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.957203 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.960674 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.156767 2104204 request.go:632] Waited for 195.280531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:13.156833 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:13.156838 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.156845 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.156849 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.159878 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.160537 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:13.160559 2104204 pod_ready.go:82] duration metric: took 399.441894ms for pod "kube-scheduler-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:13.160574 2104204 pod_ready.go:39] duration metric: took 5.199891414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:13.160596 2104204 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:15:13.160665 2104204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:15:13.175919 2104204 api_server.go:72] duration metric: took 19.554062808s to wait for apiserver process to appear ...
	I0819 12:15:13.175950 2104204 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:15:13.175977 2104204 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0819 12:15:13.182965 2104204 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0819 12:15:13.183048 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0819 12:15:13.183060 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.183075 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.183080 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.183893 2104204 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 12:15:13.183971 2104204 api_server.go:141] control plane version: v1.31.0
	I0819 12:15:13.183987 2104204 api_server.go:131] duration metric: took 8.029755ms to wait for apiserver health ...
	I0819 12:15:13.183995 2104204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:15:13.357361 2104204 request.go:632] Waited for 173.252198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.357422 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.357427 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.357435 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.357441 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.365236 2104204 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 12:15:13.371401 2104204 system_pods.go:59] 24 kube-system pods found
	I0819 12:15:13.371429 2104204 system_pods.go:61] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:15:13.371434 2104204 system_pods.go:61] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:15:13.371438 2104204 system_pods.go:61] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:15:13.371442 2104204 system_pods.go:61] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:15:13.371445 2104204 system_pods.go:61] "etcd-ha-584983-m03" [0b45768f-dbac-4e37-998c-8b7f46b894be] Running
	I0819 12:15:13.371454 2104204 system_pods.go:61] "kindnet-2llhd" [7b7f2267-915d-4346-857d-ce2721e90df5] Running
	I0819 12:15:13.371459 2104204 system_pods.go:61] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:15:13.371463 2104204 system_pods.go:61] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:15:13.371466 2104204 system_pods.go:61] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:15:13.371469 2104204 system_pods.go:61] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:15:13.371473 2104204 system_pods.go:61] "kube-apiserver-ha-584983-m03" [bbba7718-8608-4d49-8a91-ca6e38bb0336] Running
	I0819 12:15:13.371476 2104204 system_pods.go:61] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:15:13.371479 2104204 system_pods.go:61] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:15:13.371483 2104204 system_pods.go:61] "kube-controller-manager-ha-584983-m03" [4368f2cb-2795-4d23-9aaa-627f9dc78f5e] Running
	I0819 12:15:13.371489 2104204 system_pods.go:61] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:15:13.371492 2104204 system_pods.go:61] "kube-proxy-f4pns" [fcbbba62-1d0c-4cf7-af69-cd041672818a] Running
	I0819 12:15:13.371495 2104204 system_pods.go:61] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:15:13.371498 2104204 system_pods.go:61] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:15:13.371501 2104204 system_pods.go:61] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:15:13.371504 2104204 system_pods.go:61] "kube-scheduler-ha-584983-m03" [2b919943-a75c-46fb-ba29-6ee8baf0426e] Running
	I0819 12:15:13.371507 2104204 system_pods.go:61] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:15:13.371510 2104204 system_pods.go:61] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:15:13.371513 2104204 system_pods.go:61] "kube-vip-ha-584983-m03" [534bd230-4210-4ff2-a529-cc23ee355ffb] Running
	I0819 12:15:13.371515 2104204 system_pods.go:61] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:15:13.371522 2104204 system_pods.go:74] duration metric: took 187.519025ms to wait for pod list to return data ...
	I0819 12:15:13.371531 2104204 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:15:13.556989 2104204 request.go:632] Waited for 185.366877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:15:13.557063 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:15:13.557071 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.557079 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.557086 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.560927 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.561076 2104204 default_sa.go:45] found service account: "default"
	I0819 12:15:13.561095 2104204 default_sa.go:55] duration metric: took 189.555851ms for default service account to be created ...
	I0819 12:15:13.561108 2104204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:15:13.757173 2104204 request.go:632] Waited for 195.960052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.757245 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.757252 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.757263 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.757272 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.761728 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:15:13.768371 2104204 system_pods.go:86] 24 kube-system pods found
	I0819 12:15:13.768402 2104204 system_pods.go:89] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:15:13.768411 2104204 system_pods.go:89] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:15:13.768417 2104204 system_pods.go:89] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:15:13.768423 2104204 system_pods.go:89] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:15:13.768429 2104204 system_pods.go:89] "etcd-ha-584983-m03" [0b45768f-dbac-4e37-998c-8b7f46b894be] Running
	I0819 12:15:13.768435 2104204 system_pods.go:89] "kindnet-2llhd" [7b7f2267-915d-4346-857d-ce2721e90df5] Running
	I0819 12:15:13.768445 2104204 system_pods.go:89] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:15:13.768457 2104204 system_pods.go:89] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:15:13.768463 2104204 system_pods.go:89] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:15:13.768472 2104204 system_pods.go:89] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:15:13.768481 2104204 system_pods.go:89] "kube-apiserver-ha-584983-m03" [bbba7718-8608-4d49-8a91-ca6e38bb0336] Running
	I0819 12:15:13.768489 2104204 system_pods.go:89] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:15:13.768493 2104204 system_pods.go:89] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:15:13.768500 2104204 system_pods.go:89] "kube-controller-manager-ha-584983-m03" [4368f2cb-2795-4d23-9aaa-627f9dc78f5e] Running
	I0819 12:15:13.768508 2104204 system_pods.go:89] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:15:13.768514 2104204 system_pods.go:89] "kube-proxy-f4pns" [fcbbba62-1d0c-4cf7-af69-cd041672818a] Running
	I0819 12:15:13.768519 2104204 system_pods.go:89] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:15:13.768525 2104204 system_pods.go:89] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:15:13.768528 2104204 system_pods.go:89] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:15:13.768534 2104204 system_pods.go:89] "kube-scheduler-ha-584983-m03" [2b919943-a75c-46fb-ba29-6ee8baf0426e] Running
	I0819 12:15:13.768538 2104204 system_pods.go:89] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:15:13.768545 2104204 system_pods.go:89] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:15:13.768549 2104204 system_pods.go:89] "kube-vip-ha-584983-m03" [534bd230-4210-4ff2-a529-cc23ee355ffb] Running
	I0819 12:15:13.768555 2104204 system_pods.go:89] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:15:13.768561 2104204 system_pods.go:126] duration metric: took 207.44777ms to wait for k8s-apps to be running ...
	I0819 12:15:13.768571 2104204 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:15:13.768623 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:15:13.784297 2104204 system_svc.go:56] duration metric: took 15.714034ms WaitForService to wait for kubelet
	I0819 12:15:13.784328 2104204 kubeadm.go:582] duration metric: took 20.162478235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:15:13.784356 2104204 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:15:13.956764 2104204 request.go:632] Waited for 172.282363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0819 12:15:13.956830 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0819 12:15:13.956837 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.956849 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.956859 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.960451 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.961376 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:13.961397 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:13.961410 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:13.961414 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:13.961417 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:13.961421 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:13.961424 2104204 node_conditions.go:105] duration metric: took 177.063761ms to run NodePressure ...
	I0819 12:15:13.961436 2104204 start.go:241] waiting for startup goroutines ...
	I0819 12:15:13.961461 2104204 start.go:255] writing updated cluster config ...
	I0819 12:15:13.961749 2104204 ssh_runner.go:195] Run: rm -f paused
	I0819 12:15:14.014910 2104204 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 12:15:14.016961 2104204 out.go:177] * Done! kubectl is now configured to use "ha-584983" cluster and "default" namespace by default
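	(Editor's note: the log above traces minikube's "waiting for apiserver healthz status" step, which polls https://192.168.39.12:8443/healthz until it returns 200/"ok" before the run is declared Done. The following is a minimal, hypothetical Go sketch of that polling pattern using only the standard library; the URL, timeout, and interval are illustrative and the code is not taken from minikube's sources.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls a /healthz URL until it returns HTTP 200 with body "ok",
	// or the overall timeout elapses. This mirrors the healthz wait visible in the
	// log above; names and durations here are assumptions for the sketch only.
	func waitForHealthz(url string, timeout, interval time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Test clusters use self-signed certificates, so verification is
			// skipped here purely for illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.39.12:8443/healthz", 2*time.Minute, 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ok")
	}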
	
	
	==> CRI-O <==
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.372080034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069929372058598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0008fc8a-bdb0-4507-af03-8e5608285b45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.372607543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59c0e397-77e1-480c-8a8c-52bf68b5238d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.372660556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59c0e397-77e1-480c-8a8c-52bf68b5238d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.372885171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59c0e397-77e1-480c-8a8c-52bf68b5238d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.410297255Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2bcaf6b-ce60-4cf1-aabd-97eede32864b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.410368313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2bcaf6b-ce60-4cf1-aabd-97eede32864b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.411679958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab86997e-690a-4123-9dce-c4441643cb14 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.412135829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069929412112036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab86997e-690a-4123-9dce-c4441643cb14 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.412767200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5266fcbe-0151-4557-b98c-4b7e721196e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.412818895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5266fcbe-0151-4557-b98c-4b7e721196e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.413220132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5266fcbe-0151-4557-b98c-4b7e721196e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.450412790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60c4b38e-7781-4fbe-8a46-381767dc0cf3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.450574760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60c4b38e-7781-4fbe-8a46-381767dc0cf3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.451755333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6b7c794-744c-453a-a8b7-55c215dad71c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.452207918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069929452186519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6b7c794-744c-453a-a8b7-55c215dad71c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.452854442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=531bd4c5-2b30-4aa7-adfe-3d452b3e0e7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.452912091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=531bd4c5-2b30-4aa7-adfe-3d452b3e0e7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.453137350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=531bd4c5-2b30-4aa7-adfe-3d452b3e0e7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.488825269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd0bb198-4503-4fb7-babe-f00e449dd572 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.488899918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd0bb198-4503-4fb7-babe-f00e449dd572 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.490260429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4069d9a-ad87-484e-a6c6-ab2d6a9a8c7c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.490964874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069929490938058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4069d9a-ad87-484e-a6c6-ab2d6a9a8c7c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.492750867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77f45932-d057-4e42-beb9-c5d6500aa8df name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.492831987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77f45932-d057-4e42-beb9-c5d6500aa8df name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:18:49 ha-584983 crio[678]: time="2024-08-19 12:18:49.493322905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77f45932-d057-4e42-beb9-c5d6500aa8df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c12861dc1b990       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   f1429b3feda56       busybox-7dff88458-s954n
	8f808fb6e4a29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   4d323e7bde365       coredns-6f6b679f8f-jgknw
	214bc022395ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   9fff4f7ac43f0       coredns-6f6b679f8f-wll5k
	c6a144ce90fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   ce13f99cb40a3       storage-provisioner
	668ac1e0d9a1b       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    5 minutes ago       Running             kindnet-cni               0                   a9564aacd2862       kindnet-dg8d6
	6bcc112384a64       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                0                   f34234b07799c       kube-proxy-w7x44
	993c24f05aeae       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   a59ca849892df       kube-vip-ha-584983
	799883f5e6773       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   22491c1bbcc37       kube-scheduler-ha-584983
	65afb8623fcea       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   2ce334e1bfd5f       kube-apiserver-ha-584983
	7957c6abca008       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   9c0652a3e0b8e       kube-controller-manager-ha-584983
	e2afc8c50fc96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   f3aa1be314868       etcd-ha-584983
	
	
	==> coredns [214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577] <==
	[INFO] 10.244.1.2:57350 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000149447s
	[INFO] 10.244.1.2:55845 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000072579s
	[INFO] 10.244.2.2:37261 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.045080098s
	[INFO] 10.244.2.2:51973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220525s
	[INFO] 10.244.2.2:34558 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142147s
	[INFO] 10.244.2.2:33150 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000263427s
	[INFO] 10.244.0.4:58833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122286s
	[INFO] 10.244.0.4:37194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129784s
	[INFO] 10.244.1.2:34921 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829614s
	[INFO] 10.244.1.2:53910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115811s
	[INFO] 10.244.1.2:37254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008251s
	[INFO] 10.244.2.2:43347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012466s
	[INFO] 10.244.2.2:51494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210784s
	[INFO] 10.244.2.2:36691 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100028s
	[INFO] 10.244.0.4:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124263s
	[INFO] 10.244.0.4:51765 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099031s
	[INFO] 10.244.0.4:47207 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071543s
	[INFO] 10.244.0.4:37638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047976s
	[INFO] 10.244.1.2:36335 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080226s
	[INFO] 10.244.2.2:35526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107074s
	[INFO] 10.244.2.2:52104 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134936s
	[INFO] 10.244.0.4:37351 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075755s
	[INFO] 10.244.0.4:47268 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121675s
	[INFO] 10.244.1.2:55348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134871s
	[INFO] 10.244.1.2:40841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064834s
	
	
	==> coredns [8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43] <==
	[INFO] 10.244.2.2:59285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158842s
	[INFO] 10.244.2.2:55773 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212182s
	[INFO] 10.244.2.2:34121 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214046s
	[INFO] 10.244.2.2:56792 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.042778287s
	[INFO] 10.244.0.4:59540 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001673972s
	[INFO] 10.244.0.4:42038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221674s
	[INFO] 10.244.0.4:46483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00106179s
	[INFO] 10.244.0.4:48125 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091001s
	[INFO] 10.244.0.4:37333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069922s
	[INFO] 10.244.0.4:50093 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066014s
	[INFO] 10.244.1.2:35478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106276s
	[INFO] 10.244.1.2:32788 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077319s
	[INFO] 10.244.1.2:48500 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062887s
	[INFO] 10.244.1.2:52624 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001380815s
	[INFO] 10.244.1.2:35526 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215556s
	[INFO] 10.244.2.2:42664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169406s
	[INFO] 10.244.1.2:47636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107912s
	[INFO] 10.244.1.2:57549 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279969s
	[INFO] 10.244.1.2:37605 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079457s
	[INFO] 10.244.2.2:59708 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107128s
	[INFO] 10.244.2.2:54205 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121358s
	[INFO] 10.244.0.4:57068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104078s
	[INFO] 10.244.0.4:48200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000268832s
	[INFO] 10.244.1.2:48471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138271s
	[INFO] 10.244.1.2:34249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107161s
	
	
	==> describe nodes <==
	Name:               ha-584983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:13:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-584983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cd8edc3a2c24fa5a0144251d1079f81
	  System UUID:                9cd8edc3-a2c2-4fa5-a014-4251d1079f81
	  Boot ID:                    c299daa9-50a5-44ae-ad64-3cd1ed4c7fa4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s954n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-6f6b679f8f-jgknw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m58s
	  kube-system                 coredns-6f6b679f8f-wll5k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m58s
	  kube-system                 etcd-ha-584983                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m1s
	  kube-system                 kindnet-dg8d6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-584983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-controller-manager-ha-584983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-proxy-w7x44                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-584983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-vip-ha-584983                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m57s  kube-proxy       
	  Normal  Starting                 6m1s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m1s   kubelet          Node ha-584983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s   kubelet          Node ha-584983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s   kubelet          Node ha-584983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m59s  node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal  NodeReady                5m44s  kubelet          Node ha-584983 status is now: NodeReady
	  Normal  RegisteredNode           5m2s   node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal  RegisteredNode           3m51s  node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	
	
	Name:               ha-584983-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_13_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:13:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:16:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-584983-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c94b4594d79f4ec0a8dd96aab1847eb5
	  System UUID:                c94b4594-d79f-4ec0-a8dd-96aab1847eb5
	  Boot ID:                    28d9d5da-0b2e-4025-a636-4d1384214658
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xjjwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-584983-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m11s
	  kube-system                 kindnet-pb989                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m11s
	  kube-system                 kube-apiserver-ha-584983-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-ha-584983-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-cwlpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-ha-584983-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-584983-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     5m11s                  cidrAllocator    Node ha-584983-m02 status is now: CIDRAssignmentFailed
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node ha-584983-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-584983-m02 status is now: NodeNotReady
	
	
	Name:               ha-584983-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_14_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:14:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:18:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:14:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:14:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:14:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:15:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-584983-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3194fc0bd2184b688dfa8865a1e17091
	  System UUID:                3194fc0b-d218-4b68-8dfa-8865a1e17091
	  Boot ID:                    320ac109-a844-470c-bcd1-726d71c129c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hxh55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-584983-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-2llhd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-apiserver-ha-584983-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ha-584983-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-f4pns                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-scheduler-ha-584983-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-vip-ha-584983-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-584983-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m                   cidrAllocator    Node ha-584983-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal  RegisteredNode           3m51s                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	
	
	Name:               ha-584983-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_15_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:18:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:16:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-584983-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfad94302a8543c8a125516d743187f8
	  System UUID:                cfad9430-2a85-43c8-a125-516d743187f8
	  Boot ID:                    1a8eea34-dade-46ac-a76a-f1a3f4167ca5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cjfsg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-rd7nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 2m55s               kube-proxy       
	  Normal  Starting                 3m                  kubelet          Starting kubelet.
	  Normal  CIDRAssignmentFailed     2m59s               cidrAllocator    Node ha-584983-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           2m59s               node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 3m)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 3m)  kubelet          Node ha-584983-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 3m)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s               node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal  RegisteredNode           2m56s               node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal  NodeReady                2m41s               kubelet          Node ha-584983-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 12:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050355] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.755816] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.506391] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.876325] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.203525] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.059214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056195] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.190463] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.123484] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.274294] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.959189] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.909497] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056139] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.254427] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090121] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.241682] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 12:13] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7] <==
	{"level":"warn","ts":"2024-08-19T12:18:49.743166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.747906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.754492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.759745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.765168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.775146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.782692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.789974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.795084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.798141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.804007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.811123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.817193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.820549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.821711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.825070Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.832623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.838760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.841692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.844402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.848362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.851549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.854960Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.860982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:18:49.866746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:18:49 up 6 min,  0 users,  load average: 0.11, 0.22, 0.12
	Linux ha-584983 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb] <==
	I0819 12:18:15.487939       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:18:25.494887       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:18:25.494959       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:18:25.495169       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:18:25.495202       1 main.go:299] handling current node
	I0819 12:18:25.495216       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:18:25.495223       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:18:25.495347       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:18:25.495377       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:18:35.488217       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:18:35.488400       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:18:35.488724       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:18:35.488863       1 main.go:299] handling current node
	I0819 12:18:35.488967       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:18:35.489067       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:18:35.489519       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:18:35.489604       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:18:45.493045       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:18:45.493127       1 main.go:299] handling current node
	I0819 12:18:45.493170       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:18:45.493683       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:18:45.493854       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:18:45.493882       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:18:45.493946       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:18:45.493965       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857] <==
	I0819 12:12:45.469745       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 12:12:45.477529       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12]
	I0819 12:12:45.478248       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:12:45.482578       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:12:45.871703       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:12:48.509730       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:12:48.528203       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 12:12:48.546402       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:12:51.272848       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 12:12:51.523834       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 12:15:17.846800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48890: use of closed network connection
	E0819 12:15:18.034092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48898: use of closed network connection
	E0819 12:15:18.294235       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48928: use of closed network connection
	E0819 12:15:18.489646       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48944: use of closed network connection
	E0819 12:15:18.683068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48962: use of closed network connection
	E0819 12:15:18.902889       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48966: use of closed network connection
	E0819 12:15:19.084147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48986: use of closed network connection
	E0819 12:15:19.273632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49006: use of closed network connection
	E0819 12:15:19.573387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49032: use of closed network connection
	E0819 12:15:19.749854       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49048: use of closed network connection
	E0819 12:15:19.994936       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49062: use of closed network connection
	E0819 12:15:20.183769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49084: use of closed network connection
	E0819 12:15:20.359973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49114: use of closed network connection
	E0819 12:15:20.535070       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49124: use of closed network connection
	W0819 12:16:45.493254       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.206]
	
	
	==> kube-controller-manager [7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d] <==
	E0819 12:15:50.370786       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-584983-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-584983-m04"
	E0819 12:15:50.370841       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-584983-m04': failed to patch node CIDR: Node \"ha-584983-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 12:15:50.370866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:50.375889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:50.627552       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:50.639724       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-584983-m04"
	I0819 12:15:50.690363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:51.194285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.047490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.102953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.550923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.583740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:00.546236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:08.465628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:08.465869       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-584983-m04"
	I0819 12:16:08.480501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:08.566943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:20.931898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:17:03.591636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:17:03.592083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-584983-m04"
	I0819 12:17:03.610802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:17:03.760316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.134832ms"
	I0819 12:17:03.760412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.069µs"
	I0819 12:17:05.691211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:17:08.776298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	
	
	==> kube-proxy [6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:12:52.545163       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:12:52.554518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0819 12:12:52.554600       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:12:52.597599       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:12:52.597654       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:12:52.597687       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:12:52.600184       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:12:52.600537       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:12:52.600566       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:12:52.601770       1 config.go:197] "Starting service config controller"
	I0819 12:12:52.601819       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:12:52.601848       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:12:52.601852       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:12:52.602324       1 config.go:326] "Starting node config controller"
	I0819 12:12:52.602351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:12:52.702526       1 shared_informer.go:320] Caches are synced for node config
	I0819 12:12:52.702572       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:12:52.702598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94] <==
	W0819 12:12:43.898326       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:12:43.898365       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 12:12:44.779190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:44.779336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.796142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 12:12:44.796220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.825714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:12:44.825862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.899414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:12:44.899621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.943911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:44.944030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.996827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:12:44.996989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:45.070635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 12:12:45.070724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:45.161155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:45.161289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 12:12:45.489324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:15:14.882392       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hxh55" node="ha-584983-m03"
	E0819 12:15:14.883871       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" pod="default/busybox-7dff88458-hxh55"
	E0819 12:15:50.520891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	E0819 12:15:50.521012       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d49d2a08-95ae-4ebd-a52e-3d0575bfd9f5(kube-system/kube-proxy-fbckf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fbckf"
	E0819 12:15:50.521045       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" pod="kube-system/kube-proxy-fbckf"
	I0819 12:15:50.521064       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	
	
	==> kubelet <==
	Aug 19 12:17:38 ha-584983 kubelet[1315]: E0819 12:17:38.552558    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069858551656577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:17:48 ha-584983 kubelet[1315]: E0819 12:17:48.452016    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:17:48 ha-584983 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:17:48 ha-584983 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:17:48 ha-584983 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:17:48 ha-584983 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:17:48 ha-584983 kubelet[1315]: E0819 12:17:48.554092    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069868553628579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:17:48 ha-584983 kubelet[1315]: E0819 12:17:48.554139    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069868553628579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:17:58 ha-584983 kubelet[1315]: E0819 12:17:58.555842    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069878555320258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:17:58 ha-584983 kubelet[1315]: E0819 12:17:58.555876    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069878555320258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:08 ha-584983 kubelet[1315]: E0819 12:18:08.558336    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069888558007351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:08 ha-584983 kubelet[1315]: E0819 12:18:08.558364    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069888558007351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:18 ha-584983 kubelet[1315]: E0819 12:18:18.561293    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069898560909410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:18 ha-584983 kubelet[1315]: E0819 12:18:18.561358    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069898560909410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:28 ha-584983 kubelet[1315]: E0819 12:18:28.566132    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069908563879326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:28 ha-584983 kubelet[1315]: E0819 12:18:28.566185    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069908563879326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:38 ha-584983 kubelet[1315]: E0819 12:18:38.567516    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069918567101073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:38 ha-584983 kubelet[1315]: E0819 12:18:38.567540    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069918567101073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:48 ha-584983 kubelet[1315]: E0819 12:18:48.454868    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:18:48 ha-584983 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:18:48 ha-584983 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:18:48 ha-584983 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:18:48 ha-584983 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:18:48 ha-584983 kubelet[1315]: E0819 12:18:48.569693    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069928568942365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:48 ha-584983 kubelet[1315]: E0819 12:18:48.569730    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069928568942365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-584983 -n ha-584983
helpers_test.go:261: (dbg) Run:  kubectl --context ha-584983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (51.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (3.18822496s)

                                                
                                                
-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:18:54.420873 2109382 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:18:54.421122 2109382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:18:54.421134 2109382 out.go:358] Setting ErrFile to fd 2...
	I0819 12:18:54.421138 2109382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:18:54.421353 2109382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:18:54.421561 2109382 out.go:352] Setting JSON to false
	I0819 12:18:54.421591 2109382 mustload.go:65] Loading cluster: ha-584983
	I0819 12:18:54.421703 2109382 notify.go:220] Checking for updates...
	I0819 12:18:54.422120 2109382 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:18:54.422144 2109382 status.go:255] checking status of ha-584983 ...
	I0819 12:18:54.422650 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:54.422728 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:54.440569 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0819 12:18:54.441015 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:54.441671 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:54.441700 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:54.442112 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:54.442347 2109382 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:18:54.443843 2109382 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:18:54.443868 2109382 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:18:54.444193 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:54.444232 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:54.458856 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I0819 12:18:54.459319 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:54.459796 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:54.459820 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:54.460139 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:54.460373 2109382 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:18:54.463556 2109382 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:54.464060 2109382 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:18:54.464094 2109382 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:54.464253 2109382 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:18:54.464582 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:54.464638 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:54.479253 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0819 12:18:54.479641 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:54.480187 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:54.480213 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:54.480580 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:54.480813 2109382 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:18:54.481024 2109382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:54.481068 2109382 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:18:54.484003 2109382 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:54.484445 2109382 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:18:54.484479 2109382 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:54.484632 2109382 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:18:54.484796 2109382 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:18:54.484963 2109382 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:18:54.485087 2109382 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:18:54.567779 2109382 ssh_runner.go:195] Run: systemctl --version
	I0819 12:18:54.574670 2109382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:54.590020 2109382 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:18:54.590054 2109382 api_server.go:166] Checking apiserver status ...
	I0819 12:18:54.590088 2109382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:18:54.603402 2109382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:18:54.612400 2109382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:18:54.612452 2109382 ssh_runner.go:195] Run: ls
	I0819 12:18:54.616848 2109382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:18:54.622127 2109382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:18:54.622146 2109382 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:18:54.622156 2109382 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:18:54.622173 2109382 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:18:54.622474 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:54.622519 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:54.637697 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0819 12:18:54.638094 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:54.638565 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:54.638583 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:54.638906 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:54.639086 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:18:54.640543 2109382 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:18:54.640561 2109382 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:18:54.640830 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:54.640862 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:54.655184 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42265
	I0819 12:18:54.655616 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:54.656069 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:54.656092 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:54.656425 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:54.656614 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:18:54.659497 2109382 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:54.659908 2109382 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:18:54.659937 2109382 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:54.660099 2109382 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:18:54.660473 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:54.660509 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:54.675112 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33211
	I0819 12:18:54.675529 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:54.675960 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:54.675980 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:54.676369 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:54.676570 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:18:54.676754 2109382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:54.676773 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:18:54.679550 2109382 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:54.679953 2109382 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:18:54.679983 2109382 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:54.680133 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:18:54.680324 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:18:54.680515 2109382 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:18:54.680646 2109382 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:18:57.228448 2109382 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:18:57.228553 2109382 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:18:57.228571 2109382 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:18:57.228580 2109382 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:18:57.228601 2109382 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:18:57.228608 2109382 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:18:57.228935 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:57.228977 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:57.244770 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I0819 12:18:57.245266 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:57.245751 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:57.245776 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:57.246111 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:57.246282 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:18:57.247740 2109382 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:18:57.247757 2109382 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:18:57.248039 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:57.248072 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:57.262813 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0819 12:18:57.263193 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:57.263611 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:57.263631 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:57.263937 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:57.264110 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:18:57.266707 2109382 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:57.267113 2109382 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:18:57.267138 2109382 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:57.267244 2109382 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:18:57.267635 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:57.267681 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:57.282343 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
	I0819 12:18:57.282786 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:57.283279 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:57.283299 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:57.283660 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:57.283875 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:18:57.284101 2109382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:57.284126 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:18:57.287304 2109382 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:57.287851 2109382 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:18:57.287880 2109382 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:18:57.288085 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:18:57.288310 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:18:57.288478 2109382 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:18:57.288639 2109382 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:18:57.368189 2109382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:57.382395 2109382 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:18:57.382441 2109382 api_server.go:166] Checking apiserver status ...
	I0819 12:18:57.382487 2109382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:18:57.395184 2109382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:18:57.404287 2109382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:18:57.404382 2109382 ssh_runner.go:195] Run: ls
	I0819 12:18:57.408879 2109382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:18:57.413490 2109382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:18:57.413517 2109382 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:18:57.413529 2109382 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:18:57.413549 2109382 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:18:57.413923 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:57.413996 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:57.429861 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0819 12:18:57.430351 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:57.430876 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:57.430895 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:57.431222 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:57.431446 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:18:57.433152 2109382 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:18:57.433170 2109382 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:18:57.433496 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:57.433539 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:57.448407 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37101
	I0819 12:18:57.448760 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:57.449206 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:57.449228 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:57.449521 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:57.449699 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:18:57.452278 2109382 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:57.452676 2109382 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:18:57.452693 2109382 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:57.452838 2109382 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:18:57.453157 2109382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:57.453192 2109382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:57.468823 2109382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0819 12:18:57.469280 2109382 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:57.469746 2109382 main.go:141] libmachine: Using API Version  1
	I0819 12:18:57.469770 2109382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:57.470067 2109382 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:57.470226 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:18:57.470417 2109382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:57.470439 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:18:57.472596 2109382 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:57.472994 2109382 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:18:57.473021 2109382 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:18:57.473199 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:18:57.473373 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:18:57.473603 2109382 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:18:57.473745 2109382 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:18:57.551740 2109382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:57.566235 2109382 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
E0819 12:19:02.188795 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (5.262735731s)

                                                
                                                
-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:18:58.513629 2109482 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:18:58.513733 2109482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:18:58.513740 2109482 out.go:358] Setting ErrFile to fd 2...
	I0819 12:18:58.513743 2109482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:18:58.513905 2109482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:18:58.514079 2109482 out.go:352] Setting JSON to false
	I0819 12:18:58.514109 2109482 mustload.go:65] Loading cluster: ha-584983
	I0819 12:18:58.514228 2109482 notify.go:220] Checking for updates...
	I0819 12:18:58.514535 2109482 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:18:58.514555 2109482 status.go:255] checking status of ha-584983 ...
	I0819 12:18:58.514988 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:58.515047 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:58.530647 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0819 12:18:58.531054 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:58.531589 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:18:58.531617 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:58.531949 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:58.532134 2109482 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:18:58.533700 2109482 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:18:58.533716 2109482 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:18:58.534010 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:58.534045 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:58.548853 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0819 12:18:58.549258 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:58.549747 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:18:58.549771 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:58.550077 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:58.550266 2109482 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:18:58.553024 2109482 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:58.553424 2109482 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:18:58.553452 2109482 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:58.553583 2109482 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:18:58.553884 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:58.553919 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:58.569646 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0819 12:18:58.570114 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:58.570623 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:18:58.570644 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:58.570937 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:58.571097 2109482 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:18:58.571296 2109482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:58.571322 2109482 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:18:58.573817 2109482 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:58.574234 2109482 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:18:58.574272 2109482 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:18:58.574399 2109482 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:18:58.574579 2109482 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:18:58.574794 2109482 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:18:58.574951 2109482 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:18:58.668782 2109482 ssh_runner.go:195] Run: systemctl --version
	I0819 12:18:58.674831 2109482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:18:58.689252 2109482 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:18:58.689296 2109482 api_server.go:166] Checking apiserver status ...
	I0819 12:18:58.689346 2109482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:18:58.702862 2109482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:18:58.712472 2109482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:18:58.712544 2109482 ssh_runner.go:195] Run: ls
	I0819 12:18:58.718091 2109482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:18:58.723964 2109482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:18:58.723986 2109482 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:18:58.723998 2109482 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:18:58.724038 2109482 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:18:58.724404 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:58.724449 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:58.739615 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35635
	I0819 12:18:58.740065 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:58.740562 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:18:58.740582 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:58.740895 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:58.741104 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:18:58.742367 2109482 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:18:58.742392 2109482 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:18:58.742698 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:58.742744 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:58.758273 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0819 12:18:58.758635 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:58.759157 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:18:58.759178 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:58.759488 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:58.759702 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:18:58.762572 2109482 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:58.763083 2109482 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:18:58.763109 2109482 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:58.763293 2109482 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:18:58.763598 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:18:58.763633 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:18:58.778136 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0819 12:18:58.778504 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:18:58.778954 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:18:58.778975 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:18:58.779282 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:18:58.779487 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:18:58.779678 2109482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:18:58.779701 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:18:58.782601 2109482 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:58.783059 2109482 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:18:58.783080 2109482 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:18:58.783276 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:18:58.783472 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:18:58.783637 2109482 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:18:58.783780 2109482 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:19:00.304467 2109482 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:00.304544 2109482 retry.go:31] will retry after 237.402976ms: dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:03.372407 2109482 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:03.372533 2109482 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:19:03.372553 2109482 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:03.372560 2109482 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:19:03.372591 2109482 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:03.372605 2109482 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:19:03.372945 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:03.373019 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:03.390000 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0819 12:19:03.390482 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:03.391005 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:19:03.391036 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:03.391415 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:03.391622 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:03.393149 2109482 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:19:03.393166 2109482 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:03.393469 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:03.393506 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:03.409455 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0819 12:19:03.409872 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:03.410290 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:19:03.410315 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:03.410614 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:03.410758 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:19:03.413446 2109482 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:03.413892 2109482 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:03.413928 2109482 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:03.414050 2109482 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:03.414350 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:03.414388 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:03.429686 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0819 12:19:03.430114 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:03.430603 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:19:03.430624 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:03.430909 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:03.431114 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:03.431288 2109482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:03.431309 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:03.434325 2109482 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:03.434756 2109482 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:03.434788 2109482 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:03.434909 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:03.435099 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:03.435280 2109482 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:03.435433 2109482 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:03.515698 2109482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:03.531450 2109482 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:03.531483 2109482 api_server.go:166] Checking apiserver status ...
	I0819 12:19:03.531518 2109482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:03.550499 2109482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:19:03.559705 2109482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:03.559765 2109482 ssh_runner.go:195] Run: ls
	I0819 12:19:03.564653 2109482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:03.569421 2109482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:03.569449 2109482 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:19:03.569458 2109482 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:03.569475 2109482 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:19:03.569766 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:03.569801 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:03.585628 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I0819 12:19:03.586035 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:03.586497 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:19:03.586519 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:03.586878 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:03.587053 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:03.588763 2109482 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:19:03.588785 2109482 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:03.589065 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:03.589098 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:03.604179 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0819 12:19:03.604577 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:03.605081 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:19:03.605115 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:03.605469 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:03.605687 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:19:03.608591 2109482 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:03.608968 2109482 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:03.608986 2109482 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:03.609131 2109482 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:03.609471 2109482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:03.609510 2109482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:03.625364 2109482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0819 12:19:03.625785 2109482 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:03.626315 2109482 main.go:141] libmachine: Using API Version  1
	I0819 12:19:03.626338 2109482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:03.626673 2109482 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:03.626913 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:03.627110 2109482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:03.627133 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:03.629976 2109482 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:03.630534 2109482 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:03.630557 2109482 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:03.630729 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:03.630888 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:03.631024 2109482 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:03.631153 2109482 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:03.715348 2109482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:03.729619 2109482 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (5.156062726s)

                                                
                                                
-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:19:04.751022 2109583 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:04.751441 2109583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:04.751456 2109583 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:04.751467 2109583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:04.751702 2109583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:19:04.751867 2109583 out.go:352] Setting JSON to false
	I0819 12:19:04.751896 2109583 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:04.751933 2109583 notify.go:220] Checking for updates...
	I0819 12:19:04.752306 2109583 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:04.752324 2109583 status.go:255] checking status of ha-584983 ...
	I0819 12:19:04.752710 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:04.752761 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:04.770689 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0819 12:19:04.771083 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:04.771634 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:04.771654 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:04.772128 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:04.772383 2109583 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:19:04.774049 2109583 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:19:04.774066 2109583 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:04.774536 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:04.774580 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:04.790018 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0819 12:19:04.790414 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:04.790825 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:04.790851 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:04.791162 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:04.791368 2109583 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:19:04.794174 2109583 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:04.794608 2109583 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:04.794644 2109583 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:04.794800 2109583 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:04.795094 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:04.795129 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:04.811417 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0819 12:19:04.811782 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:04.812315 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:04.812338 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:04.812666 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:04.812898 2109583 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:19:04.813080 2109583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:04.813106 2109583 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:19:04.815823 2109583 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:04.816233 2109583 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:04.816265 2109583 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:04.816385 2109583 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:19:04.816561 2109583 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:19:04.816723 2109583 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:19:04.816882 2109583 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:19:04.899951 2109583 ssh_runner.go:195] Run: systemctl --version
	I0819 12:19:04.906646 2109583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:04.921329 2109583 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:04.921370 2109583 api_server.go:166] Checking apiserver status ...
	I0819 12:19:04.921423 2109583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:04.935370 2109583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:19:04.944441 2109583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:04.944501 2109583 ssh_runner.go:195] Run: ls
	I0819 12:19:04.948869 2109583 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:04.954058 2109583 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:04.954089 2109583 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:19:04.954104 2109583 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:04.954137 2109583 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:19:04.954588 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:04.954642 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:04.971483 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0819 12:19:04.971963 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:04.972458 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:04.972481 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:04.972802 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:04.973018 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:19:04.974528 2109583 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:19:04.974547 2109583 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:04.974860 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:04.974893 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:04.989656 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0819 12:19:04.990014 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:04.990534 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:04.990558 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:04.990832 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:04.991019 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:19:04.993545 2109583 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:04.993944 2109583 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:04.993973 2109583 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:04.994103 2109583 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:04.994390 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:04.994436 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:05.009382 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0819 12:19:05.009741 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:05.010172 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:05.010193 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:05.010522 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:05.010721 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:19:05.010903 2109583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:05.010920 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:19:05.013517 2109583 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:05.013899 2109583 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:05.013920 2109583 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:05.014068 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:19:05.014241 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:19:05.014403 2109583 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:19:05.014529 2109583 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:19:06.444528 2109583 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:06.444587 2109583 retry.go:31] will retry after 270.980536ms: dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:09.516503 2109583 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:09.516616 2109583 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:19:09.516642 2109583 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:09.516653 2109583 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:19:09.516693 2109583 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:09.516707 2109583 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:19:09.517068 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:09.517128 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:09.532762 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0819 12:19:09.533195 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:09.533718 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:09.533741 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:09.534071 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:09.534257 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:09.535945 2109583 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:19:09.535962 2109583 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:09.536414 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:09.536459 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:09.550997 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
	I0819 12:19:09.551395 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:09.551878 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:09.551901 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:09.552265 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:09.552420 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:19:09.555260 2109583 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:09.555765 2109583 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:09.555786 2109583 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:09.555956 2109583 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:09.556293 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:09.556334 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:09.571677 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0819 12:19:09.572046 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:09.572512 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:09.572532 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:09.572820 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:09.573032 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:09.573235 2109583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:09.573276 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:09.575957 2109583 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:09.576382 2109583 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:09.576429 2109583 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:09.576542 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:09.576708 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:09.576846 2109583 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:09.576958 2109583 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:09.659958 2109583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:09.675500 2109583 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:09.675533 2109583 api_server.go:166] Checking apiserver status ...
	I0819 12:19:09.675578 2109583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:09.689536 2109583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:19:09.699630 2109583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:09.699696 2109583 ssh_runner.go:195] Run: ls
	I0819 12:19:09.704138 2109583 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:09.709089 2109583 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:09.709114 2109583 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:19:09.709125 2109583 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:09.709148 2109583 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:19:09.709457 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:09.709501 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:09.724931 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0819 12:19:09.725396 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:09.725869 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:09.725892 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:09.726186 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:09.726385 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:09.727832 2109583 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:19:09.727849 2109583 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:09.728121 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:09.728151 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:09.743266 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0819 12:19:09.743669 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:09.744138 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:09.744158 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:09.744490 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:09.744698 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:19:09.747338 2109583 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:09.747815 2109583 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:09.747848 2109583 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:09.747979 2109583 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:09.748343 2109583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:09.748382 2109583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:09.764391 2109583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0819 12:19:09.764789 2109583 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:09.765308 2109583 main.go:141] libmachine: Using API Version  1
	I0819 12:19:09.765327 2109583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:09.765651 2109583 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:09.765862 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:09.766031 2109583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:09.766053 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:09.768590 2109583 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:09.769050 2109583 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:09.769070 2109583 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:09.769228 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:09.769419 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:09.769582 2109583 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:09.769746 2109583 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:09.847873 2109583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:09.862625 2109583 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0819 12:19:10.529436 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (3.742486506s)

-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 12:19:12.994216 2109683 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:12.994498 2109683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:12.994510 2109683 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:12.994514 2109683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:12.994713 2109683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:19:12.994877 2109683 out.go:352] Setting JSON to false
	I0819 12:19:12.994904 2109683 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:12.995002 2109683 notify.go:220] Checking for updates...
	I0819 12:19:12.995338 2109683 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:12.995357 2109683 status.go:255] checking status of ha-584983 ...
	I0819 12:19:12.995984 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:12.996037 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:13.012219 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0819 12:19:13.012696 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:13.013265 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:13.013301 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:13.013699 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:13.013931 2109683 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:19:13.015557 2109683 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:19:13.015576 2109683 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:13.015871 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:13.015916 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:13.030828 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0819 12:19:13.031219 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:13.031648 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:13.031670 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:13.031924 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:13.032196 2109683 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:19:13.034714 2109683 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:13.035157 2109683 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:13.035182 2109683 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:13.035289 2109683 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:13.035598 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:13.035651 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:13.050981 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0819 12:19:13.051415 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:13.051913 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:13.051937 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:13.052238 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:13.052430 2109683 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:19:13.052582 2109683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:13.052607 2109683 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:19:13.055384 2109683 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:13.055871 2109683 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:13.055895 2109683 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:13.056019 2109683 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:19:13.056207 2109683 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:19:13.056339 2109683 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:19:13.056456 2109683 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:19:13.140479 2109683 ssh_runner.go:195] Run: systemctl --version
	I0819 12:19:13.147022 2109683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:13.162836 2109683 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:13.162874 2109683 api_server.go:166] Checking apiserver status ...
	I0819 12:19:13.162907 2109683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:13.177095 2109683 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:19:13.187638 2109683 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:13.187702 2109683 ssh_runner.go:195] Run: ls
	I0819 12:19:13.192679 2109683 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:13.200059 2109683 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:13.200085 2109683 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:19:13.200096 2109683 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:13.200114 2109683 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:19:13.200451 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:13.200489 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:13.216205 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I0819 12:19:13.216683 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:13.217146 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:13.217172 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:13.217505 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:13.217684 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:19:13.219279 2109683 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:19:13.219299 2109683 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:13.219648 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:13.219702 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:13.236327 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0819 12:19:13.236796 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:13.237239 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:13.237261 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:13.237566 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:13.237795 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:19:13.240657 2109683 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:13.241103 2109683 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:13.241130 2109683 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:13.241299 2109683 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:13.241598 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:13.241633 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:13.256411 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
	I0819 12:19:13.256882 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:13.257368 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:13.257417 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:13.257720 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:13.257873 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:19:13.258067 2109683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:13.258088 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:19:13.260587 2109683 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:13.261004 2109683 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:13.261029 2109683 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:13.261222 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:19:13.261402 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:19:13.261563 2109683 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:19:13.261684 2109683 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:19:16.332509 2109683 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:16.332666 2109683 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:19:16.332692 2109683 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:16.332716 2109683 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:19:16.332743 2109683 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:16.332754 2109683 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:19:16.333160 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:16.333212 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:16.348888 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0819 12:19:16.349375 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:16.349857 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:16.349880 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:16.350215 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:16.350394 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:16.352105 2109683 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:19:16.352126 2109683 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:16.352472 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:16.352528 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:16.367955 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0819 12:19:16.368452 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:16.368940 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:16.368963 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:16.369274 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:16.369596 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:19:16.372351 2109683 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:16.372761 2109683 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:16.372787 2109683 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:16.372916 2109683 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:16.373216 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:16.373249 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:16.388229 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0819 12:19:16.388732 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:16.389211 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:16.389234 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:16.389531 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:16.389729 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:16.389967 2109683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:16.389994 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:16.392626 2109683 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:16.392996 2109683 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:16.393031 2109683 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:16.393166 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:16.393342 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:16.393478 2109683 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:16.393615 2109683 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:16.475906 2109683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:16.492628 2109683 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:16.492657 2109683 api_server.go:166] Checking apiserver status ...
	I0819 12:19:16.492704 2109683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:16.509201 2109683 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:19:16.520558 2109683 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:16.520608 2109683 ssh_runner.go:195] Run: ls
	I0819 12:19:16.524772 2109683 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:16.530589 2109683 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:16.530614 2109683 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:19:16.530626 2109683 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:16.530647 2109683 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:19:16.531247 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:16.531302 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:16.547324 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41583
	I0819 12:19:16.547758 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:16.548234 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:16.548262 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:16.548538 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:16.548751 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:16.550161 2109683 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:19:16.550181 2109683 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:16.550472 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:16.550530 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:16.565122 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0819 12:19:16.565498 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:16.565915 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:16.565937 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:16.566238 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:16.566400 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:19:16.569024 2109683 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:16.569434 2109683 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:16.569465 2109683 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:16.569610 2109683 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:16.569884 2109683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:16.569919 2109683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:16.584742 2109683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0819 12:19:16.585077 2109683 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:16.585519 2109683 main.go:141] libmachine: Using API Version  1
	I0819 12:19:16.585536 2109683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:16.585829 2109683 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:16.586025 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:16.586216 2109683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:16.586237 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:16.588857 2109683 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:16.589275 2109683 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:16.589313 2109683 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:16.589511 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:16.589689 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:16.589868 2109683 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:16.590032 2109683 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:16.675448 2109683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:16.690681 2109683 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (3.721254426s)

-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 12:19:20.055987 2109799 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:20.056140 2109799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:20.056151 2109799 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:20.056158 2109799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:20.056352 2109799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:19:20.056544 2109799 out.go:352] Setting JSON to false
	I0819 12:19:20.056586 2109799 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:20.056678 2109799 notify.go:220] Checking for updates...
	I0819 12:19:20.056998 2109799 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:20.057017 2109799 status.go:255] checking status of ha-584983 ...
	I0819 12:19:20.057425 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:20.057498 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:20.074407 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0819 12:19:20.074890 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:20.075470 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:20.075500 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:20.075878 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:20.076092 2109799 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:19:20.077731 2109799 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:19:20.077746 2109799 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:20.078047 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:20.078081 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:20.093311 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0819 12:19:20.093783 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:20.094311 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:20.094334 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:20.094757 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:20.095007 2109799 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:19:20.098029 2109799 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:20.098445 2109799 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:20.098475 2109799 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:20.098603 2109799 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:20.098903 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:20.098940 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:20.114603 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I0819 12:19:20.115051 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:20.115537 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:20.115566 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:20.115882 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:20.116059 2109799 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:19:20.116238 2109799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:20.116266 2109799 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:19:20.118741 2109799 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:20.119173 2109799 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:20.119201 2109799 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:20.119399 2109799 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:19:20.119588 2109799 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:19:20.119736 2109799 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:19:20.119863 2109799 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:19:20.204255 2109799 ssh_runner.go:195] Run: systemctl --version
	I0819 12:19:20.210942 2109799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:20.226266 2109799 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:20.226314 2109799 api_server.go:166] Checking apiserver status ...
	I0819 12:19:20.226369 2109799 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:20.240612 2109799 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:19:20.249593 2109799 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:20.249660 2109799 ssh_runner.go:195] Run: ls
	I0819 12:19:20.254143 2109799 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:20.258804 2109799 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:20.258827 2109799 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:19:20.258839 2109799 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:20.258861 2109799 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:19:20.259272 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:20.259341 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:20.275249 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0819 12:19:20.275695 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:20.276146 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:20.276180 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:20.276528 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:20.276771 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:19:20.278250 2109799 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:19:20.278267 2109799 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:20.278551 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:20.278594 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:20.293989 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0819 12:19:20.294470 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:20.295045 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:20.295089 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:20.295467 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:20.295683 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:19:20.299180 2109799 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:20.299676 2109799 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:20.299699 2109799 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:20.299882 2109799 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:20.300320 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:20.300367 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:20.315974 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0819 12:19:20.316424 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:20.316879 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:20.316900 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:20.317196 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:20.317406 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:19:20.317583 2109799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:20.317607 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:19:20.320057 2109799 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:20.320455 2109799 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:20.320501 2109799 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:20.320711 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:19:20.320888 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:19:20.321050 2109799 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:19:20.321230 2109799 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:19:23.372417 2109799 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:23.372537 2109799 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:19:23.372566 2109799 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:23.372577 2109799 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:19:23.372603 2109799 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:23.372617 2109799 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:19:23.372970 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:23.373020 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:23.388396 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I0819 12:19:23.388850 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:23.389352 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:23.389378 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:23.389732 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:23.389939 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:23.391594 2109799 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:19:23.391610 2109799 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:23.391910 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:23.391955 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:23.407885 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I0819 12:19:23.408314 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:23.408837 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:23.408864 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:23.409162 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:23.409377 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:19:23.411770 2109799 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:23.412262 2109799 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:23.412291 2109799 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:23.412425 2109799 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:23.412754 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:23.412798 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:23.427935 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0819 12:19:23.428453 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:23.428986 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:23.429011 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:23.429373 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:23.429581 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:23.429763 2109799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:23.429784 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:23.432644 2109799 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:23.433053 2109799 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:23.433081 2109799 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:23.433248 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:23.433430 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:23.433593 2109799 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:23.433722 2109799 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:23.512660 2109799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:23.526780 2109799 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:23.526819 2109799 api_server.go:166] Checking apiserver status ...
	I0819 12:19:23.526863 2109799 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:23.540830 2109799 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:19:23.557410 2109799 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:23.557502 2109799 ssh_runner.go:195] Run: ls
	I0819 12:19:23.564959 2109799 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:23.570434 2109799 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:23.570463 2109799 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:19:23.570472 2109799 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:23.570490 2109799 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:19:23.570805 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:23.570846 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:23.586623 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0819 12:19:23.587140 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:23.587711 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:23.587737 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:23.588081 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:23.588348 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:23.589994 2109799 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:19:23.590016 2109799 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:23.590334 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:23.590381 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:23.605897 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0819 12:19:23.606406 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:23.606840 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:23.606862 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:23.607188 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:23.607374 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:19:23.610108 2109799 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:23.610924 2109799 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:23.610965 2109799 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:23.611123 2109799 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:23.611547 2109799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:23.611597 2109799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:23.628364 2109799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0819 12:19:23.628866 2109799 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:23.629385 2109799 main.go:141] libmachine: Using API Version  1
	I0819 12:19:23.629408 2109799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:23.629668 2109799 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:23.629856 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:23.630074 2109799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:23.630098 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:23.632863 2109799 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:23.633288 2109799 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:23.633323 2109799 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:23.633476 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:23.633650 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:23.633831 2109799 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:23.633976 2109799 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:23.715413 2109799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:23.729364 2109799 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (3.739213586s)

-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 12:19:29.050642 2109915 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:29.050891 2109915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:29.050899 2109915 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:29.050903 2109915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:29.051116 2109915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:19:29.051275 2109915 out.go:352] Setting JSON to false
	I0819 12:19:29.051303 2109915 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:29.051415 2109915 notify.go:220] Checking for updates...
	I0819 12:19:29.051706 2109915 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:29.051724 2109915 status.go:255] checking status of ha-584983 ...
	I0819 12:19:29.052232 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:29.052304 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:29.067915 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0819 12:19:29.068395 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:29.068998 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:29.069034 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:29.069423 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:29.069601 2109915 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:19:29.071630 2109915 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:19:29.071651 2109915 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:29.072043 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:29.072091 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:29.088717 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0819 12:19:29.089101 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:29.089730 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:29.089756 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:29.090121 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:29.090402 2109915 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:19:29.093327 2109915 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:29.093834 2109915 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:29.093873 2109915 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:29.094019 2109915 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:29.094448 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:29.094495 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:29.109533 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I0819 12:19:29.109958 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:29.110497 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:29.110530 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:29.110858 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:29.111102 2109915 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:19:29.111335 2109915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:29.111364 2109915 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:19:29.114072 2109915 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:29.114572 2109915 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:29.114598 2109915 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:29.114737 2109915 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:19:29.114910 2109915 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:19:29.115050 2109915 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:19:29.115208 2109915 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:19:29.204910 2109915 ssh_runner.go:195] Run: systemctl --version
	I0819 12:19:29.211066 2109915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:29.226148 2109915 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:29.226184 2109915 api_server.go:166] Checking apiserver status ...
	I0819 12:19:29.226217 2109915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:29.239924 2109915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:19:29.249405 2109915 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:29.249453 2109915 ssh_runner.go:195] Run: ls
	I0819 12:19:29.253978 2109915 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:29.258347 2109915 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:29.258366 2109915 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:19:29.258376 2109915 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:29.258394 2109915 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:19:29.258686 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:29.258718 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:29.274065 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0819 12:19:29.274497 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:29.275017 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:29.275045 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:29.275450 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:29.275679 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:19:29.277486 2109915 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:19:29.277502 2109915 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:29.277818 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:29.277854 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:29.293578 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0819 12:19:29.294047 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:29.294586 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:29.294614 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:29.294969 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:29.295179 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:19:29.298309 2109915 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:29.298785 2109915 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:29.298807 2109915 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:29.298947 2109915 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:19:29.299304 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:29.299354 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:29.314470 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0819 12:19:29.314823 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:29.315316 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:29.315336 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:29.315629 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:29.315820 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:19:29.316018 2109915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:29.316039 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:19:29.318466 2109915 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:29.318956 2109915 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:19:29.318985 2109915 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:19:29.319153 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:19:29.319327 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:19:29.319476 2109915 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:19:29.319585 2109915 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	W0819 12:19:32.396481 2109915 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0819 12:19:32.396611 2109915 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0819 12:19:32.396655 2109915 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:32.396669 2109915 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:19:32.396698 2109915 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0819 12:19:32.396709 2109915 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:19:32.397189 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:32.397252 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:32.413016 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0819 12:19:32.413493 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:32.414055 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:32.414086 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:32.414425 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:32.414650 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:32.416532 2109915 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:19:32.416550 2109915 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:32.416856 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:32.416896 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:32.432019 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I0819 12:19:32.432453 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:32.432950 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:32.432973 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:32.433272 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:32.433476 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:19:32.436911 2109915 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:32.437384 2109915 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:32.437409 2109915 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:32.437652 2109915 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:32.437949 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:32.437992 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:32.453206 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0819 12:19:32.453613 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:32.454120 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:32.454144 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:32.454462 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:32.454694 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:32.454887 2109915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:32.454908 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:32.457560 2109915 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:32.458088 2109915 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:32.458116 2109915 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:32.458295 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:32.458461 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:32.458642 2109915 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:32.458800 2109915 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:32.540198 2109915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:32.554694 2109915 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:32.554730 2109915 api_server.go:166] Checking apiserver status ...
	I0819 12:19:32.554774 2109915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:32.567648 2109915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:19:32.576442 2109915 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:32.576483 2109915 ssh_runner.go:195] Run: ls
	I0819 12:19:32.580530 2109915 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:32.585199 2109915 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:32.585226 2109915 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:19:32.585238 2109915 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:32.585261 2109915 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:19:32.585710 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:32.585759 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:32.601949 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0819 12:19:32.602494 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:32.602962 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:32.602985 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:32.603355 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:32.603613 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:32.605261 2109915 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:19:32.605278 2109915 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:32.605674 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:32.605718 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:32.620816 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I0819 12:19:32.621360 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:32.621904 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:32.621925 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:32.622240 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:32.622442 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:19:32.625156 2109915 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:32.625701 2109915 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:32.625725 2109915 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:32.625874 2109915 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:32.626193 2109915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:32.626245 2109915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:32.640976 2109915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0819 12:19:32.641383 2109915 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:32.641898 2109915 main.go:141] libmachine: Using API Version  1
	I0819 12:19:32.641923 2109915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:32.642352 2109915 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:32.642574 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:32.642792 2109915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:32.642815 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:32.645669 2109915 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:32.646114 2109915 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:32.646152 2109915 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:32.646313 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:32.646499 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:32.646678 2109915 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:32.646835 2109915 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:32.728306 2109915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:32.742764 2109915 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
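The trace above (status.go / api_server.go lines) shows how the status check walks each node: it opens an SSH session to the node, reads disk usage of /var, checks whether the kubelet unit is active, and probes the apiserver's /healthz endpoint through the HA virtual IP 192.168.39.254:8443. For ha-584983-m02 the SSH dial fails with "no route to host", so that node is reported as Host:Error with Kubelet and APIServer Nonexistent. The commands below are a hedged sketch, not part of the test run, reproducing those probes by hand with the IP, username, and key path logged above; the 5-second connect timeout is an assumption.

	# Sketch only: manual version of the per-node probes logged above.
	KEY=/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa
	if ssh -i "$KEY" -o ConnectTimeout=5 docker@192.168.39.84 \
	     "df -h /var | awk 'NR==2{print \$5}' && sudo systemctl is-active kubelet"; then
	  echo "node reachable and kubelet active"
	else
	  echo "dial or check failed -> status reports Host:Error / Kubelet:Nonexistent"
	fi
	# Control-plane health, checked against the HA VIP exactly as in the log:
	curl -ks https://192.168.39.254:8443/healthz    # expected reply: ok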
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 7 (624.064671ms)

                                                
                                                
-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-584983-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:19:43.177596 2110050 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:43.177857 2110050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:43.177867 2110050 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:43.177872 2110050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:43.178071 2110050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:19:43.178247 2110050 out.go:352] Setting JSON to false
	I0819 12:19:43.178275 2110050 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:43.178406 2110050 notify.go:220] Checking for updates...
	I0819 12:19:43.178826 2110050 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:43.178847 2110050 status.go:255] checking status of ha-584983 ...
	I0819 12:19:43.179399 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.179483 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.200292 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I0819 12:19:43.200757 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.201428 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.201471 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.201907 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.202152 2110050 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:19:43.203878 2110050 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:19:43.203894 2110050 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:43.204208 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.204252 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.218856 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0819 12:19:43.219308 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.219777 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.219801 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.220122 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.220321 2110050 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:19:43.223054 2110050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:43.223478 2110050 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:43.223498 2110050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:43.223623 2110050 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:19:43.223914 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.223973 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.239613 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42443
	I0819 12:19:43.239977 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.240484 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.240524 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.240837 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.240998 2110050 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:19:43.241115 2110050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:43.241145 2110050 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:19:43.243805 2110050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:43.244262 2110050 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:19:43.244288 2110050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:19:43.244428 2110050 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:19:43.244615 2110050 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:19:43.244772 2110050 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:19:43.244913 2110050 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:19:43.332908 2110050 ssh_runner.go:195] Run: systemctl --version
	I0819 12:19:43.339795 2110050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:43.356682 2110050 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:43.356730 2110050 api_server.go:166] Checking apiserver status ...
	I0819 12:19:43.356777 2110050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:43.370756 2110050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0819 12:19:43.379962 2110050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:43.380038 2110050 ssh_runner.go:195] Run: ls
	I0819 12:19:43.385231 2110050 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:43.389580 2110050 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:43.389607 2110050 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:19:43.389621 2110050 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:43.389643 2110050 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:19:43.389987 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.390025 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.405583 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
	I0819 12:19:43.406110 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.406636 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.406657 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.407015 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.407202 2110050 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:19:43.408778 2110050 status.go:330] ha-584983-m02 host status = "Stopped" (err=<nil>)
	I0819 12:19:43.408796 2110050 status.go:343] host is not running, skipping remaining checks
	I0819 12:19:43.408815 2110050 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:43.408843 2110050 status.go:255] checking status of ha-584983-m03 ...
	I0819 12:19:43.409167 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.409210 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.424721 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0819 12:19:43.425176 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.425720 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.425745 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.426129 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.426348 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:43.428025 2110050 status.go:330] ha-584983-m03 host status = "Running" (err=<nil>)
	I0819 12:19:43.428045 2110050 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:43.428407 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.428473 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.443907 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0819 12:19:43.444381 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.444833 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.444855 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.445185 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.445368 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:19:43.448082 2110050 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:43.448508 2110050 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:43.448533 2110050 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:43.448674 2110050 host.go:66] Checking if "ha-584983-m03" exists ...
	I0819 12:19:43.449033 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.449074 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.463675 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0819 12:19:43.464064 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.464518 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.464539 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.464860 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.465056 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:43.465267 2110050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:43.465290 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:43.467868 2110050 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:43.468270 2110050 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:43.468294 2110050 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:43.468427 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:43.468594 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:43.468747 2110050 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:43.468904 2110050 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:43.549536 2110050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:43.565847 2110050 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:19:43.565878 2110050 api_server.go:166] Checking apiserver status ...
	I0819 12:19:43.565919 2110050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:19:43.578928 2110050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W0819 12:19:43.588460 2110050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:43.588521 2110050 ssh_runner.go:195] Run: ls
	I0819 12:19:43.592868 2110050 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:19:43.597213 2110050 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:19:43.597250 2110050 status.go:422] ha-584983-m03 apiserver status = Running (err=<nil>)
	I0819 12:19:43.597268 2110050 status.go:257] ha-584983-m03 status: &{Name:ha-584983-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:19:43.597292 2110050 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:19:43.597599 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.597633 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.612801 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0819 12:19:43.613304 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.613811 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.613843 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.614142 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.614330 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:43.615976 2110050 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:19:43.615999 2110050 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:43.616356 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.616394 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.633303 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0819 12:19:43.633704 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.634186 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.634209 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.634533 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.634718 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:19:43.637851 2110050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:43.638308 2110050 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:43.638331 2110050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:43.638451 2110050 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:19:43.638762 2110050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:43.638798 2110050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:43.653879 2110050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0819 12:19:43.654333 2110050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:43.654812 2110050 main.go:141] libmachine: Using API Version  1
	I0819 12:19:43.654835 2110050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:43.655143 2110050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:43.655315 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:43.655529 2110050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:43.655555 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:43.657999 2110050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:43.658411 2110050 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:43.658434 2110050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:43.658601 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:43.658733 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:43.658876 2110050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:43.658981 2110050 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:43.739819 2110050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:19:43.754637 2110050 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr" : exit status 7
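The non-zero exit (status 7) reflects that ha-584983-m02 is still reported as Stopped in the stdout above, which is what ha_test.go:432 fails on. As a hedged follow-up sketch, not something the test itself runs, the same per-node fields can be inspected from the JSON form of the status command, mirroring the Name/Host/Kubelet/APIServer struct fields printed in the stderr; the -o json flag and the jq filter are assumptions about the environment.

	# Sketch only: list per-node health from the JSON form of the same status query.
	out/minikube-linux-amd64 -p ha-584983 status -o json 2>/dev/null \
	  | jq -r '.[] | "\(.Name): host=\(.Host) kubelet=\(.Kubelet) apiserver=\(.APIServer)"'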
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-584983 -n ha-584983
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-584983 logs -n 25: (1.374336849s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983:/home/docker/cp-test_ha-584983-m03_ha-584983.txt                       |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983 sudo cat                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983.txt                                 |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m04 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp testdata/cp-test.txt                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983:/home/docker/cp-test_ha-584983-m04_ha-584983.txt                       |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983 sudo cat                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983.txt                                 |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03:/home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m03 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-584983 node stop m02 -v=7                                                     | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-584983 node start m02 -v=7                                                    | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:12:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:12:06.992137 2104204 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:12:06.992436 2104204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:12:06.992446 2104204 out.go:358] Setting ErrFile to fd 2...
	I0819 12:12:06.992450 2104204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:12:06.992597 2104204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:12:06.993153 2104204 out.go:352] Setting JSON to false
	I0819 12:12:06.994070 2104204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":161677,"bootTime":1723907850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:12:06.994130 2104204 start.go:139] virtualization: kvm guest
	I0819 12:12:06.996118 2104204 out.go:177] * [ha-584983] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:12:06.997456 2104204 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:12:06.997502 2104204 notify.go:220] Checking for updates...
	I0819 12:12:06.999710 2104204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:12:07.000781 2104204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:12:07.001857 2104204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:07.002961 2104204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:12:07.004066 2104204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:12:07.005400 2104204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:12:07.040832 2104204 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 12:12:07.041861 2104204 start.go:297] selected driver: kvm2
	I0819 12:12:07.041879 2104204 start.go:901] validating driver "kvm2" against <nil>
	I0819 12:12:07.041891 2104204 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:12:07.042563 2104204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:12:07.042641 2104204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:12:07.058177 2104204 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:12:07.058230 2104204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:12:07.058454 2104204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:12:07.058487 2104204 cni.go:84] Creating CNI manager for ""
	I0819 12:12:07.058497 2104204 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 12:12:07.058502 2104204 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 12:12:07.058551 2104204 start.go:340] cluster config:
	{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:12:07.058667 2104204 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:12:07.060465 2104204 out.go:177] * Starting "ha-584983" primary control-plane node in "ha-584983" cluster
	I0819 12:12:07.061680 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:12:07.061729 2104204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:12:07.061744 2104204 cache.go:56] Caching tarball of preloaded images
	I0819 12:12:07.061902 2104204 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:12:07.061922 2104204 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:12:07.062389 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:07.062422 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json: {Name:mk6e05058afcdcc1e7287821cd2e620cfeeb5720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:07.062609 2104204 start.go:360] acquireMachinesLock for ha-584983: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:12:07.062650 2104204 start.go:364] duration metric: took 24.193µs to acquireMachinesLock for "ha-584983"
	I0819 12:12:07.062672 2104204 start.go:93] Provisioning new machine with config: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:12:07.062735 2104204 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 12:12:07.065021 2104204 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:12:07.065200 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:07.065247 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:07.080779 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0819 12:12:07.081225 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:07.081879 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:07.081900 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:07.082225 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:07.082428 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:07.082565 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:07.082736 2104204 start.go:159] libmachine.API.Create for "ha-584983" (driver="kvm2")
	I0819 12:12:07.082771 2104204 client.go:168] LocalClient.Create starting
	I0819 12:12:07.082812 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 12:12:07.082856 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:07.082873 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:07.082931 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 12:12:07.082951 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:07.082961 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:07.082978 2104204 main.go:141] libmachine: Running pre-create checks...
	I0819 12:12:07.082991 2104204 main.go:141] libmachine: (ha-584983) Calling .PreCreateCheck
	I0819 12:12:07.083300 2104204 main.go:141] libmachine: (ha-584983) Calling .GetConfigRaw
	I0819 12:12:07.083711 2104204 main.go:141] libmachine: Creating machine...
	I0819 12:12:07.083728 2104204 main.go:141] libmachine: (ha-584983) Calling .Create
	I0819 12:12:07.083850 2104204 main.go:141] libmachine: (ha-584983) Creating KVM machine...
	I0819 12:12:07.085716 2104204 main.go:141] libmachine: (ha-584983) DBG | found existing default KVM network
	I0819 12:12:07.086383 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.086263 2104227 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0819 12:12:07.086448 2104204 main.go:141] libmachine: (ha-584983) DBG | created network xml: 
	I0819 12:12:07.086477 2104204 main.go:141] libmachine: (ha-584983) DBG | <network>
	I0819 12:12:07.086485 2104204 main.go:141] libmachine: (ha-584983) DBG |   <name>mk-ha-584983</name>
	I0819 12:12:07.086490 2104204 main.go:141] libmachine: (ha-584983) DBG |   <dns enable='no'/>
	I0819 12:12:07.086496 2104204 main.go:141] libmachine: (ha-584983) DBG |   
	I0819 12:12:07.086502 2104204 main.go:141] libmachine: (ha-584983) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 12:12:07.086510 2104204 main.go:141] libmachine: (ha-584983) DBG |     <dhcp>
	I0819 12:12:07.086516 2104204 main.go:141] libmachine: (ha-584983) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 12:12:07.086524 2104204 main.go:141] libmachine: (ha-584983) DBG |     </dhcp>
	I0819 12:12:07.086530 2104204 main.go:141] libmachine: (ha-584983) DBG |   </ip>
	I0819 12:12:07.086541 2104204 main.go:141] libmachine: (ha-584983) DBG |   
	I0819 12:12:07.086555 2104204 main.go:141] libmachine: (ha-584983) DBG | </network>
	I0819 12:12:07.086571 2104204 main.go:141] libmachine: (ha-584983) DBG | 
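The <network> XML printed above is what the kvm2 driver hands to libvirt to create the private mk-ha-584983 network (no DNS, DHCP range 192.168.39.2-253). For reference only, a rough manual equivalent with the virsh CLI would look like the sketch below; the file name is hypothetical, and minikube actually performs this through the libvirt API rather than the shell.

    # Assumes the <network> definition above was saved to mk-ha-584983.xml on the host.
    virsh net-define mk-ha-584983.xml   # register the network with libvirt
    virsh net-start mk-ha-584983        # create the bridge and start DHCP for the range above
    virsh net-list --all                # confirm mk-ha-584983 shows up as active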
	I0819 12:12:07.091516 2104204 main.go:141] libmachine: (ha-584983) DBG | trying to create private KVM network mk-ha-584983 192.168.39.0/24...
	I0819 12:12:07.165841 2104204 main.go:141] libmachine: (ha-584983) DBG | private KVM network mk-ha-584983 192.168.39.0/24 created
	I0819 12:12:07.165909 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.165790 2104227 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:07.165926 2104204 main.go:141] libmachine: (ha-584983) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983 ...
	I0819 12:12:07.165961 2104204 main.go:141] libmachine: (ha-584983) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:12:07.165980 2104204 main.go:141] libmachine: (ha-584983) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:12:07.445397 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.445236 2104227 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa...
	I0819 12:12:07.630664 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.630541 2104227 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/ha-584983.rawdisk...
	I0819 12:12:07.630693 2104204 main.go:141] libmachine: (ha-584983) DBG | Writing magic tar header
	I0819 12:12:07.630704 2104204 main.go:141] libmachine: (ha-584983) DBG | Writing SSH key tar header
	I0819 12:12:07.630722 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:07.630658 2104227 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983 ...
	I0819 12:12:07.630734 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983
	I0819 12:12:07.630805 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983 (perms=drwx------)
	I0819 12:12:07.630837 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:12:07.630848 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 12:12:07.630858 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 12:12:07.630868 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 12:12:07.630892 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:12:07.630905 2104204 main.go:141] libmachine: (ha-584983) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:12:07.630916 2104204 main.go:141] libmachine: (ha-584983) Creating domain...
	I0819 12:12:07.630926 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:07.630943 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 12:12:07.630951 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:12:07.630957 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:12:07.630964 2104204 main.go:141] libmachine: (ha-584983) DBG | Checking permissions on dir: /home
	I0819 12:12:07.630972 2104204 main.go:141] libmachine: (ha-584983) DBG | Skipping /home - not owner
	I0819 12:12:07.632124 2104204 main.go:141] libmachine: (ha-584983) define libvirt domain using xml: 
	I0819 12:12:07.632157 2104204 main.go:141] libmachine: (ha-584983) <domain type='kvm'>
	I0819 12:12:07.632179 2104204 main.go:141] libmachine: (ha-584983)   <name>ha-584983</name>
	I0819 12:12:07.632194 2104204 main.go:141] libmachine: (ha-584983)   <memory unit='MiB'>2200</memory>
	I0819 12:12:07.632202 2104204 main.go:141] libmachine: (ha-584983)   <vcpu>2</vcpu>
	I0819 12:12:07.632210 2104204 main.go:141] libmachine: (ha-584983)   <features>
	I0819 12:12:07.632218 2104204 main.go:141] libmachine: (ha-584983)     <acpi/>
	I0819 12:12:07.632223 2104204 main.go:141] libmachine: (ha-584983)     <apic/>
	I0819 12:12:07.632228 2104204 main.go:141] libmachine: (ha-584983)     <pae/>
	I0819 12:12:07.632239 2104204 main.go:141] libmachine: (ha-584983)     
	I0819 12:12:07.632248 2104204 main.go:141] libmachine: (ha-584983)   </features>
	I0819 12:12:07.632254 2104204 main.go:141] libmachine: (ha-584983)   <cpu mode='host-passthrough'>
	I0819 12:12:07.632262 2104204 main.go:141] libmachine: (ha-584983)   
	I0819 12:12:07.632273 2104204 main.go:141] libmachine: (ha-584983)   </cpu>
	I0819 12:12:07.632282 2104204 main.go:141] libmachine: (ha-584983)   <os>
	I0819 12:12:07.632289 2104204 main.go:141] libmachine: (ha-584983)     <type>hvm</type>
	I0819 12:12:07.632301 2104204 main.go:141] libmachine: (ha-584983)     <boot dev='cdrom'/>
	I0819 12:12:07.632309 2104204 main.go:141] libmachine: (ha-584983)     <boot dev='hd'/>
	I0819 12:12:07.632317 2104204 main.go:141] libmachine: (ha-584983)     <bootmenu enable='no'/>
	I0819 12:12:07.632325 2104204 main.go:141] libmachine: (ha-584983)   </os>
	I0819 12:12:07.632330 2104204 main.go:141] libmachine: (ha-584983)   <devices>
	I0819 12:12:07.632338 2104204 main.go:141] libmachine: (ha-584983)     <disk type='file' device='cdrom'>
	I0819 12:12:07.632370 2104204 main.go:141] libmachine: (ha-584983)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/boot2docker.iso'/>
	I0819 12:12:07.632395 2104204 main.go:141] libmachine: (ha-584983)       <target dev='hdc' bus='scsi'/>
	I0819 12:12:07.632406 2104204 main.go:141] libmachine: (ha-584983)       <readonly/>
	I0819 12:12:07.632415 2104204 main.go:141] libmachine: (ha-584983)     </disk>
	I0819 12:12:07.632427 2104204 main.go:141] libmachine: (ha-584983)     <disk type='file' device='disk'>
	I0819 12:12:07.632440 2104204 main.go:141] libmachine: (ha-584983)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:12:07.632457 2104204 main.go:141] libmachine: (ha-584983)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/ha-584983.rawdisk'/>
	I0819 12:12:07.632472 2104204 main.go:141] libmachine: (ha-584983)       <target dev='hda' bus='virtio'/>
	I0819 12:12:07.632484 2104204 main.go:141] libmachine: (ha-584983)     </disk>
	I0819 12:12:07.632514 2104204 main.go:141] libmachine: (ha-584983)     <interface type='network'>
	I0819 12:12:07.632528 2104204 main.go:141] libmachine: (ha-584983)       <source network='mk-ha-584983'/>
	I0819 12:12:07.632537 2104204 main.go:141] libmachine: (ha-584983)       <model type='virtio'/>
	I0819 12:12:07.632565 2104204 main.go:141] libmachine: (ha-584983)     </interface>
	I0819 12:12:07.632577 2104204 main.go:141] libmachine: (ha-584983)     <interface type='network'>
	I0819 12:12:07.632642 2104204 main.go:141] libmachine: (ha-584983)       <source network='default'/>
	I0819 12:12:07.632669 2104204 main.go:141] libmachine: (ha-584983)       <model type='virtio'/>
	I0819 12:12:07.632682 2104204 main.go:141] libmachine: (ha-584983)     </interface>
	I0819 12:12:07.632692 2104204 main.go:141] libmachine: (ha-584983)     <serial type='pty'>
	I0819 12:12:07.632704 2104204 main.go:141] libmachine: (ha-584983)       <target port='0'/>
	I0819 12:12:07.632713 2104204 main.go:141] libmachine: (ha-584983)     </serial>
	I0819 12:12:07.632725 2104204 main.go:141] libmachine: (ha-584983)     <console type='pty'>
	I0819 12:12:07.632742 2104204 main.go:141] libmachine: (ha-584983)       <target type='serial' port='0'/>
	I0819 12:12:07.632753 2104204 main.go:141] libmachine: (ha-584983)     </console>
	I0819 12:12:07.632763 2104204 main.go:141] libmachine: (ha-584983)     <rng model='virtio'>
	I0819 12:12:07.632773 2104204 main.go:141] libmachine: (ha-584983)       <backend model='random'>/dev/random</backend>
	I0819 12:12:07.632783 2104204 main.go:141] libmachine: (ha-584983)     </rng>
	I0819 12:12:07.632792 2104204 main.go:141] libmachine: (ha-584983)     
	I0819 12:12:07.632799 2104204 main.go:141] libmachine: (ha-584983)     
	I0819 12:12:07.632828 2104204 main.go:141] libmachine: (ha-584983)   </devices>
	I0819 12:12:07.632853 2104204 main.go:141] libmachine: (ha-584983) </domain>
	I0819 12:12:07.632880 2104204 main.go:141] libmachine: (ha-584983) 
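The domain XML above (2 vCPUs, 2200 MiB RAM, the boot2docker ISO as cdrom, the raw disk, and NICs on mk-ha-584983 and the default network) is then defined and booted, which is the "Creating domain..." step that follows. A hand-run approximation, assuming the XML were saved to ha-584983.xml, would be:

    # Illustrative only; minikube defines and starts the domain via the libvirt API.
    virsh define ha-584983.xml       # register the domain from the generated XML
    virsh start ha-584983            # boot the VM
    virsh dumpxml ha-584983 | less   # inspect what libvirt actually stored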
	I0819 12:12:07.636985 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:f0:e6:49 in network default
	I0819 12:12:07.637584 2104204 main.go:141] libmachine: (ha-584983) Ensuring networks are active...
	I0819 12:12:07.637606 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:07.638343 2104204 main.go:141] libmachine: (ha-584983) Ensuring network default is active
	I0819 12:12:07.638624 2104204 main.go:141] libmachine: (ha-584983) Ensuring network mk-ha-584983 is active
	I0819 12:12:07.639038 2104204 main.go:141] libmachine: (ha-584983) Getting domain xml...
	I0819 12:12:07.639762 2104204 main.go:141] libmachine: (ha-584983) Creating domain...
	I0819 12:12:08.833673 2104204 main.go:141] libmachine: (ha-584983) Waiting to get IP...
	I0819 12:12:08.834563 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:08.834940 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:08.834984 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:08.834933 2104227 retry.go:31] will retry after 248.222805ms: waiting for machine to come up
	I0819 12:12:09.084568 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:09.085027 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:09.085059 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:09.084975 2104227 retry.go:31] will retry after 317.039873ms: waiting for machine to come up
	I0819 12:12:09.403400 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:09.403888 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:09.403912 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:09.403836 2104227 retry.go:31] will retry after 411.043815ms: waiting for machine to come up
	I0819 12:12:09.816372 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:09.816783 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:09.816839 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:09.816734 2104227 retry.go:31] will retry after 479.128255ms: waiting for machine to come up
	I0819 12:12:10.297403 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:10.297847 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:10.297871 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:10.297796 2104227 retry.go:31] will retry after 639.683545ms: waiting for machine to come up
	I0819 12:12:10.938598 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:10.939036 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:10.939060 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:10.938987 2104227 retry.go:31] will retry after 932.901833ms: waiting for machine to come up
	I0819 12:12:11.873122 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:11.873585 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:11.873636 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:11.873533 2104227 retry.go:31] will retry after 1.067845741s: waiting for machine to come up
	I0819 12:12:12.943413 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:12.943841 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:12.943887 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:12.943813 2104227 retry.go:31] will retry after 1.213836701s: waiting for machine to come up
	I0819 12:12:14.159270 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:14.159638 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:14.159677 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:14.159607 2104227 retry.go:31] will retry after 1.173001156s: waiting for machine to come up
	I0819 12:12:15.333975 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:15.334376 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:15.334400 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:15.334337 2104227 retry.go:31] will retry after 1.790531734s: waiting for machine to come up
	I0819 12:12:17.126466 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:17.126874 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:17.126899 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:17.126828 2104227 retry.go:31] will retry after 2.177399011s: waiting for machine to come up
	I0819 12:12:19.307163 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:19.307480 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:19.307509 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:19.307433 2104227 retry.go:31] will retry after 3.635463931s: waiting for machine to come up
	I0819 12:12:22.944987 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:22.945311 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find current IP address of domain ha-584983 in network mk-ha-584983
	I0819 12:12:22.945346 2104204 main.go:141] libmachine: (ha-584983) DBG | I0819 12:12:22.945247 2104227 retry.go:31] will retry after 4.489328535s: waiting for machine to come up
	I0819 12:12:27.435618 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.436062 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has current primary IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.436087 2104204 main.go:141] libmachine: (ha-584983) Found IP for machine: 192.168.39.12
	I0819 12:12:27.436100 2104204 main.go:141] libmachine: (ha-584983) Reserving static IP address...
	I0819 12:12:27.436457 2104204 main.go:141] libmachine: (ha-584983) DBG | unable to find host DHCP lease matching {name: "ha-584983", mac: "52:54:00:be:50:ea", ip: "192.168.39.12"} in network mk-ha-584983
	I0819 12:12:27.516487 2104204 main.go:141] libmachine: (ha-584983) DBG | Getting to WaitForSSH function...
	I0819 12:12:27.516524 2104204 main.go:141] libmachine: (ha-584983) Reserved static IP address: 192.168.39.12
	I0819 12:12:27.516541 2104204 main.go:141] libmachine: (ha-584983) Waiting for SSH to be available...
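The retry loop above simply polls libvirt until the guest picks up a DHCP lease on mk-ha-584983 (here 192.168.39.12 after roughly 20 seconds). On a reasonably recent libvirt the same information can be read by hand, which can help when this wait times out:

    # Inspect the leases and addresses libvirt knows about (illustrative).
    virsh net-dhcp-leases mk-ha-584983   # DHCP lease table for the private network
    virsh domifaddr ha-584983            # addresses reported for the domain's NICs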
	I0819 12:12:27.518982 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.519444 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.519534 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.519580 2104204 main.go:141] libmachine: (ha-584983) DBG | Using SSH client type: external
	I0819 12:12:27.519611 2104204 main.go:141] libmachine: (ha-584983) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa (-rw-------)
	I0819 12:12:27.519653 2104204 main.go:141] libmachine: (ha-584983) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:12:27.519674 2104204 main.go:141] libmachine: (ha-584983) DBG | About to run SSH command:
	I0819 12:12:27.519688 2104204 main.go:141] libmachine: (ha-584983) DBG | exit 0
	I0819 12:12:27.648019 2104204 main.go:141] libmachine: (ha-584983) DBG | SSH cmd err, output: <nil>: 
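The WaitForSSH probe above runs "exit 0" through an external ssh client with the options shown in the DBG line. Re-running the same probe by hand against the new VM would look roughly like this (key path, user, and address taken from the log):

    # Manual re-run of the SSH reachability check from the log above.
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa \
        docker@192.168.39.12 'exit 0' && echo ssh-ok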
	I0819 12:12:27.648244 2104204 main.go:141] libmachine: (ha-584983) KVM machine creation complete!
	I0819 12:12:27.648590 2104204 main.go:141] libmachine: (ha-584983) Calling .GetConfigRaw
	I0819 12:12:27.649208 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:27.649421 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:27.649574 2104204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:12:27.649592 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:27.650845 2104204 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:12:27.650862 2104204 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:12:27.650868 2104204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:12:27.650874 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:27.653255 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.653623 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.653646 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.653780 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:27.653959 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.654110 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.654232 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:27.654407 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:27.654661 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:27.654680 2104204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:12:27.763257 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:12:27.763287 2104204 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:12:27.763299 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:27.765800 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.766163 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.766192 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.766304 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:27.766520 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.766710 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.766819 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:27.766996 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:27.767200 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:27.767219 2104204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:12:27.877123 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:12:27.877255 2104204 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:12:27.877271 2104204 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:12:27.877282 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:27.877558 2104204 buildroot.go:166] provisioning hostname "ha-584983"
	I0819 12:12:27.877584 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:27.877777 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:27.880426 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.880778 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:27.880808 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:27.880930 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:27.881120 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.881235 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:27.881399 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:27.881566 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:27.881733 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:27.881744 2104204 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983 && echo "ha-584983" | sudo tee /etc/hostname
	I0819 12:12:28.006185 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983
	
	I0819 12:12:28.006215 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.009165 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.009494 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.009523 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.009683 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.009906 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.010046 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.010208 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.010362 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:28.010578 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:28.010595 2104204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:12:28.129393 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:12:28.129429 2104204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:12:28.129452 2104204 buildroot.go:174] setting up certificates
	I0819 12:12:28.129468 2104204 provision.go:84] configureAuth start
	I0819 12:12:28.129481 2104204 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:12:28.129854 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:28.132834 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.133216 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.133246 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.133376 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.135812 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.136109 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.136179 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.136279 2104204 provision.go:143] copyHostCerts
	I0819 12:12:28.136316 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:12:28.136363 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:12:28.136392 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:12:28.136474 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:12:28.136606 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:12:28.136646 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:12:28.136656 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:12:28.136699 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:12:28.136777 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:12:28.136801 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:12:28.136808 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:12:28.136843 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:12:28.136928 2104204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983 san=[127.0.0.1 192.168.39.12 ha-584983 localhost minikube]
	I0819 12:12:28.201240 2104204 provision.go:177] copyRemoteCerts
	I0819 12:12:28.201319 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:12:28.201349 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.204320 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.204670 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.204712 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.204846 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.205067 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.205216 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.205320 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.290311 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:12:28.290389 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:12:28.313988 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:12:28.314062 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 12:12:28.337418 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:12:28.337488 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
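Per the provisioning step above, the generated server certificate (SANs 127.0.0.1, 192.168.39.12, ha-584983, localhost, minikube) ends up under /etc/docker on the guest. A quick way to confirm the SANs landed as expected, assuming openssl is available in the guest image:

    # Verification sketch, run over ssh on the guest.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'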
	I0819 12:12:28.360055 2104204 provision.go:87] duration metric: took 230.571098ms to configureAuth
	I0819 12:12:28.360086 2104204 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:12:28.360334 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:28.360451 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.363133 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.363508 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.363540 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.363701 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.363906 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.364089 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.364254 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.364416 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:28.364615 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:28.364629 2104204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:12:28.637654 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:12:28.637686 2104204 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:12:28.637697 2104204 main.go:141] libmachine: (ha-584983) Calling .GetURL
	I0819 12:12:28.639100 2104204 main.go:141] libmachine: (ha-584983) DBG | Using libvirt version 6000000
	I0819 12:12:28.641202 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.641520 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.641545 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.641761 2104204 main.go:141] libmachine: Docker is up and running!
	I0819 12:12:28.641777 2104204 main.go:141] libmachine: Reticulating splines...
	I0819 12:12:28.641785 2104204 client.go:171] duration metric: took 21.559002125s to LocalClient.Create
	I0819 12:12:28.641807 2104204 start.go:167] duration metric: took 21.559075213s to libmachine.API.Create "ha-584983"
	I0819 12:12:28.641816 2104204 start.go:293] postStartSetup for "ha-584983" (driver="kvm2")
	I0819 12:12:28.641825 2104204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:12:28.641842 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.642076 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:12:28.642101 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.644100 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.644418 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.644444 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.644599 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.644796 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.644962 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.645085 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.730178 2104204 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:12:28.734149 2104204 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:12:28.734186 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:12:28.734251 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:12:28.734345 2104204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:12:28.734357 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:12:28.734496 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:12:28.743850 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:12:28.766088 2104204 start.go:296] duration metric: took 124.26097ms for postStartSetup
	I0819 12:12:28.766136 2104204 main.go:141] libmachine: (ha-584983) Calling .GetConfigRaw
	I0819 12:12:28.766751 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:28.769288 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.769618 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.769652 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.769890 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:28.770080 2104204 start.go:128] duration metric: took 21.707330197s to createHost
	I0819 12:12:28.770103 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.772135 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.772486 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.772515 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.772646 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.772834 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.773024 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.773155 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.773324 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:12:28.773491 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:12:28.773501 2104204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:12:28.884734 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069548.858637417
	
	I0819 12:12:28.884764 2104204 fix.go:216] guest clock: 1724069548.858637417
	I0819 12:12:28.884772 2104204 fix.go:229] Guest: 2024-08-19 12:12:28.858637417 +0000 UTC Remote: 2024-08-19 12:12:28.770092271 +0000 UTC m=+21.812458134 (delta=88.545146ms)
	I0819 12:12:28.884793 2104204 fix.go:200] guest clock delta is within tolerance: 88.545146ms
	I0819 12:12:28.884799 2104204 start.go:83] releasing machines lock for "ha-584983", held for 21.822138177s
	I0819 12:12:28.884819 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.885096 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:28.887707 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.888032 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.888060 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.888208 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.888685 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.888861 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:28.888970 2104204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:12:28.889006 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.889141 2104204 ssh_runner.go:195] Run: cat /version.json
	I0819 12:12:28.889181 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:28.891594 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.891844 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.891871 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.891907 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.892104 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.892255 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:28.892275 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:28.892294 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.892438 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:28.892454 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.892582 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.892636 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:28.892769 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:28.892888 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:28.997153 2104204 ssh_runner.go:195] Run: systemctl --version
	I0819 12:12:29.002917 2104204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:12:29.156072 2104204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:12:29.162017 2104204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:12:29.162086 2104204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:12:29.177501 2104204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:12:29.177525 2104204 start.go:495] detecting cgroup driver to use...
	I0819 12:12:29.177584 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:12:29.193283 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:12:29.207112 2104204 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:12:29.207170 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:12:29.220108 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:12:29.233119 2104204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:12:29.341287 2104204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:12:29.490039 2104204 docker.go:233] disabling docker service ...
	I0819 12:12:29.490111 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:12:29.505142 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:12:29.517330 2104204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:12:29.649463 2104204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:12:29.771780 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:12:29.785363 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:12:29.803502 2104204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:12:29.803567 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.813984 2104204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:12:29.814088 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.824448 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.834691 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.845044 2104204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:12:29.855416 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.865544 2104204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:12:29.881836 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
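The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls; crictl was already pointed at the CRI-O socket via /etc/crictl.yaml. A quick way to spot-check the result on the node (the grep pattern is illustrative, not part of the test):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock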
	I0819 12:12:29.891658 2104204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:12:29.900933 2104204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:12:29.900975 2104204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:12:29.913701 2104204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:12:29.922823 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:12:30.045585 2104204 ssh_runner.go:195] Run: sudo systemctl restart crio
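The sysctl probe above fails with status 255 because br_netfilter is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is switched on before CRI-O is restarted with the new configuration. The equivalent manual steps, as a sketch (the paths are the ones probed in the log):
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables       # resolves once the module is loaded (typically 1)
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio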
	I0819 12:12:30.170840 2104204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:12:30.170911 2104204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:12:30.175636 2104204 start.go:563] Will wait 60s for crictl version
	I0819 12:12:30.175706 2104204 ssh_runner.go:195] Run: which crictl
	I0819 12:12:30.179323 2104204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:12:30.222715 2104204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:12:30.222815 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:12:30.249893 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:12:30.278636 2104204 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:12:30.280057 2104204 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:12:30.282979 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:30.283313 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:30.283340 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:30.283535 2104204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:12:30.287799 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:12:30.300616 2104204 kubeadm.go:883] updating cluster {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:12:30.300779 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:12:30.300844 2104204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:12:30.334470 2104204 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 12:12:30.334557 2104204 ssh_runner.go:195] Run: which lz4
	I0819 12:12:30.338591 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 12:12:30.338684 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 12:12:30.342667 2104204 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 12:12:30.342699 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 12:12:31.657563 2104204 crio.go:462] duration metric: took 1.318896291s to copy over tarball
	I0819 12:12:31.657659 2104204 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 12:12:33.637692 2104204 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979997464s)
	I0819 12:12:33.637727 2104204 crio.go:469] duration metric: took 1.980129197s to extract the tarball
	I0819 12:12:33.637735 2104204 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 12:12:33.674912 2104204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:12:33.718314 2104204 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:12:33.718346 2104204 cache_images.go:84] Images are preloaded, skipping loading
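No /preloaded.tar.lz4 exists on the fresh VM, so the ~389 MB preload tarball is pushed over SSH (about 1.3s) and unpacked into /var (about 2.0s), after which crictl reports all v1.31.0 images as present. The unpack step as it appears above, shown as a standalone sketch assuming lz4 is installed on the guest:
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json   # now lists the preloaded kube images
	sudo rm -f /preloaded.tar.lz4      # the tarball is removed once extracted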
	I0819 12:12:33.718356 2104204 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0819 12:12:33.718508 2104204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
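The kubelet unit rendered above is installed as a systemd drop-in rather than a full unit: further down, 10-kubeadm.conf (308 bytes) is copied to /etc/systemd/system/kubelet.service.d/ and kubelet.service (352 bytes) to /lib/systemd/system/. A small sketch for inspecting the result on the node; the commands are illustrative, not part of the test:
	systemctl cat kubelet                                        # unit plus the 10-kubeadm.conf drop-in
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # carries the ExecStart shown above, pinning
	                                                             # --node-ip=192.168.39.12 and --hostname-override=ha-584983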
	I0819 12:12:33.718601 2104204 ssh_runner.go:195] Run: crio config
	I0819 12:12:33.768097 2104204 cni.go:84] Creating CNI manager for ""
	I0819 12:12:33.768130 2104204 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 12:12:33.768141 2104204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:12:33.768189 2104204 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-584983 NodeName:ha-584983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:12:33.768353 2104204 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-584983"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
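The config above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but reports as deprecated during init (see the two W0819 warnings later in this log). A sketch of the migration those warnings suggest, run against the file minikube writes to /var/tmp/minikube/kubeadm.yaml; the output path is illustrative:
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /tmp/kubeadm-migrated.yaml   # hypothetical destination, then diff against the original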
	I0819 12:12:33.768382 2104204 kube-vip.go:115] generating kube-vip config ...
	I0819 12:12:33.768437 2104204 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:12:33.786436 2104204 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:12:33.786552 2104204 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
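The manifest runs kube-vip as a host-network static pod with NET_ADMIN/NET_RAW, announcing the control-plane VIP 192.168.39.254 over ARP on eth0 and, with lb_enable/lb_port set, load-balancing API traffic on 8443 across control-plane members. A minimal check once the file lands in /etc/kubernetes/manifests (commands are illustrative, not from the test):
	cat /etc/kubernetes/manifests/kube-vip.yaml    # the generated static pod shown above
	ip addr show dev eth0 | grep 192.168.39.254    # VIP is present on the leader after election
	curl -k https://192.168.39.254:8443/healthz    # API server reachable through the VIP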
	I0819 12:12:33.786622 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:12:33.796413 2104204 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:12:33.796493 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 12:12:33.805662 2104204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0819 12:12:33.821342 2104204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:12:33.836923 2104204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0819 12:12:33.852195 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 12:12:33.867701 2104204 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:12:33.871431 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
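Both hostname injections follow the same pattern: filter any stale entry out of /etc/hosts, append the fresh mapping, and copy the temp file back with sudo. After this step and the earlier host.minikube.internal edit, the guest's /etc/hosts should carry the two entries below (addresses taken from the log; shown only as the expected end state):
	192.168.39.1	host.minikube.internal
	192.168.39.254	control-plane.minikube.internal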
	I0819 12:12:33.883043 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:12:34.004866 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:12:34.021983 2104204 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.12
	I0819 12:12:34.022017 2104204 certs.go:194] generating shared ca certs ...
	I0819 12:12:34.022040 2104204 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.022257 2104204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:12:34.022325 2104204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:12:34.022343 2104204 certs.go:256] generating profile certs ...
	I0819 12:12:34.022429 2104204 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:12:34.022450 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt with IP's: []
	I0819 12:12:34.174427 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt ...
	I0819 12:12:34.174457 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt: {Name:mk0d5583e764aa292baa802da2b1f9061f76f122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.174655 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key ...
	I0819 12:12:34.174670 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key: {Name:mk0927d142c6782b7ebd5507529b65dfa62b7b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.174780 2104204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7
	I0819 12:12:34.174805 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.254]
	I0819 12:12:34.336324 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7 ...
	I0819 12:12:34.336362 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7: {Name:mka4fa0f2524582fec5a532ef8eb39a00889f2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.336579 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7 ...
	I0819 12:12:34.336600 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7: {Name:mk5d6efebe3cd14d5e4ded3578c0fe43fc37706c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.336709 2104204 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.b3cd3da7 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:12:34.336846 2104204 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.b3cd3da7 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:12:34.336936 2104204 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:12:34.336964 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt with IP's: []
	I0819 12:12:34.466046 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt ...
	I0819 12:12:34.466079 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt: {Name:mk02a7937a51bf8400ff714216cc25074371bf5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.466284 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key ...
	I0819 12:12:34.466299 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key: {Name:mk8a56d868ab63bdee37415360c66158a7db1d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:34.466401 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:12:34.466426 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:12:34.466446 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:12:34.466464 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:12:34.466481 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:12:34.466499 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:12:34.466517 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:12:34.466535 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:12:34.466601 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:12:34.466649 2104204 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:12:34.466662 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:12:34.466695 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:12:34.466725 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:12:34.466753 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:12:34.466809 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:12:34.466849 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.466870 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.466887 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.467611 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:12:34.491994 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:12:34.513771 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:12:34.535665 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:12:34.557422 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:12:34.579963 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:12:34.602946 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:12:34.625167 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:12:34.647462 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:12:34.669898 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:12:34.691695 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:12:34.713382 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:12:34.728768 2104204 ssh_runner.go:195] Run: openssl version
	I0819 12:12:34.734128 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:12:34.744551 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.748621 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.748671 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:12:34.754036 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:12:34.765040 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:12:34.778502 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.783318 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.783388 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:12:34.796906 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:12:34.812958 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:12:34.825837 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.832477 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.832531 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:12:34.845387 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
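Each CA lands twice on the node: the PEM goes under /usr/share/ca-certificates and a subject-hash symlink is created in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how OpenSSL looks certificates up. A sketch of the convention using the minikubeCA file from the log:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to that PEM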
	I0819 12:12:34.857084 2104204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:12:34.860880 2104204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:12:34.860934 2104204 kubeadm.go:392] StartCluster: {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:12:34.861024 2104204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:12:34.861084 2104204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:12:34.901686 2104204 cri.go:89] found id: ""
	I0819 12:12:34.901761 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:12:34.911175 2104204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:12:34.920357 2104204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:12:34.929252 2104204 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:12:34.929269 2104204 kubeadm.go:157] found existing configuration files:
	
	I0819 12:12:34.929315 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:12:34.937722 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:12:34.937765 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:12:34.946542 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:12:34.955043 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:12:34.955099 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:12:34.963816 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:12:34.972237 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:12:34.972279 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:12:34.980981 2104204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:12:34.989383 2104204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:12:34.989434 2104204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:12:34.998105 2104204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 12:12:35.109103 2104204 kubeadm.go:310] W0819 12:12:35.090621     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:12:35.109876 2104204 kubeadm.go:310] W0819 12:12:35.091392     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:12:35.207387 2104204 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:12:49.106079 2104204 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 12:12:49.106170 2104204 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 12:12:49.106272 2104204 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 12:12:49.106408 2104204 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 12:12:49.106538 2104204 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 12:12:49.106657 2104204 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 12:12:49.108333 2104204 out.go:235]   - Generating certificates and keys ...
	I0819 12:12:49.108438 2104204 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 12:12:49.108532 2104204 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 12:12:49.108635 2104204 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 12:12:49.108710 2104204 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 12:12:49.108811 2104204 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 12:12:49.108885 2104204 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 12:12:49.108963 2104204 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 12:12:49.109064 2104204 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-584983 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0819 12:12:49.109127 2104204 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 12:12:49.109246 2104204 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-584983 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0819 12:12:49.109303 2104204 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 12:12:49.109359 2104204 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 12:12:49.109398 2104204 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 12:12:49.109475 2104204 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 12:12:49.109575 2104204 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 12:12:49.109651 2104204 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 12:12:49.109719 2104204 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 12:12:49.109806 2104204 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 12:12:49.109883 2104204 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 12:12:49.109972 2104204 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 12:12:49.110056 2104204 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 12:12:49.111542 2104204 out.go:235]   - Booting up control plane ...
	I0819 12:12:49.111640 2104204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 12:12:49.111729 2104204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 12:12:49.111828 2104204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 12:12:49.111926 2104204 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 12:12:49.112018 2104204 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 12:12:49.112074 2104204 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 12:12:49.112208 2104204 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 12:12:49.112343 2104204 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 12:12:49.112419 2104204 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296484s
	I0819 12:12:49.112499 2104204 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 12:12:49.112573 2104204 kubeadm.go:310] [api-check] The API server is healthy after 8.770810911s
	I0819 12:12:49.112727 2104204 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 12:12:49.112875 2104204 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 12:12:49.112961 2104204 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 12:12:49.113134 2104204 kubeadm.go:310] [mark-control-plane] Marking the node ha-584983 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 12:12:49.113182 2104204 kubeadm.go:310] [bootstrap-token] Using token: w0w3gq.45woyd02sscpvoqz
	I0819 12:12:49.114464 2104204 out.go:235]   - Configuring RBAC rules ...
	I0819 12:12:49.114555 2104204 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 12:12:49.114624 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 12:12:49.114755 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 12:12:49.114867 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 12:12:49.114958 2104204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 12:12:49.115086 2104204 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 12:12:49.115222 2104204 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 12:12:49.115264 2104204 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 12:12:49.115307 2104204 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 12:12:49.115313 2104204 kubeadm.go:310] 
	I0819 12:12:49.115374 2104204 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 12:12:49.115380 2104204 kubeadm.go:310] 
	I0819 12:12:49.115465 2104204 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 12:12:49.115472 2104204 kubeadm.go:310] 
	I0819 12:12:49.115501 2104204 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 12:12:49.115568 2104204 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 12:12:49.115611 2104204 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 12:12:49.115620 2104204 kubeadm.go:310] 
	I0819 12:12:49.115666 2104204 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 12:12:49.115671 2104204 kubeadm.go:310] 
	I0819 12:12:49.115722 2104204 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 12:12:49.115728 2104204 kubeadm.go:310] 
	I0819 12:12:49.115779 2104204 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 12:12:49.115849 2104204 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 12:12:49.115914 2104204 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 12:12:49.115932 2104204 kubeadm.go:310] 
	I0819 12:12:49.116029 2104204 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 12:12:49.116097 2104204 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 12:12:49.116105 2104204 kubeadm.go:310] 
	I0819 12:12:49.116204 2104204 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w0w3gq.45woyd02sscpvoqz \
	I0819 12:12:49.116320 2104204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 12:12:49.116361 2104204 kubeadm.go:310] 	--control-plane 
	I0819 12:12:49.116369 2104204 kubeadm.go:310] 
	I0819 12:12:49.116439 2104204 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 12:12:49.116446 2104204 kubeadm.go:310] 
	I0819 12:12:49.116520 2104204 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w0w3gq.45woyd02sscpvoqz \
	I0819 12:12:49.116645 2104204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
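kubeadm finishes by printing join commands carrying the bootstrap token w0w3gq.45woyd02sscpvoqz (24h TTL per the InitConfiguration above) and the CA cert hash. A hedged sketch for regenerating a join command later and confirming the control plane is up, using the binaries minikube stages under /var/lib/minikube/binaries/v1.31.0 (standard kubeadm/kubectl invocations, not taken from this log):
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm token create --print-join-command
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide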
	I0819 12:12:49.116667 2104204 cni.go:84] Creating CNI manager for ""
	I0819 12:12:49.116678 2104204 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 12:12:49.118040 2104204 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 12:12:49.119156 2104204 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 12:12:49.124373 2104204 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 12:12:49.124398 2104204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 12:12:49.144567 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
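With /opt/cni/bin/portmap present and more than one node expected, minikube renders the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the staged kubectl. A short check of the outcome; the app=kindnet selector is an assumption about the manifest's labels, not something shown in this log:
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pods -l app=kindnet -o wide   # assumed label; adjust to the manifest's actual labels
	ls /etc/cni/net.d/                                   # kindnet config next to the .mk_disabled podman bridge conflist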
	I0819 12:12:49.506592 2104204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 12:12:49.506695 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:49.506717 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-584983 minikube.k8s.io/updated_at=2024_08_19T12_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=ha-584983 minikube.k8s.io/primary=true
	I0819 12:12:49.552914 2104204 ops.go:34] apiserver oom_adj: -16
	I0819 12:12:49.759456 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:50.260269 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:50.759732 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:51.260179 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:12:51.397809 2104204 kubeadm.go:1113] duration metric: took 1.891185053s to wait for elevateKubeSystemPrivileges
	I0819 12:12:51.397851 2104204 kubeadm.go:394] duration metric: took 16.536920385s to StartCluster
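After init, minikube binds the kube-system default service account to cluster-admin through the minikube-rbac ClusterRoleBinding, labels the node with its version and commit, and polls "kubectl get sa default" until the account exists (about 1.9s here). An illustrative way to confirm both, reusing the staged kubectl:
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get clusterrolebinding minikube-rbac -o wide
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get node ha-584983 --show-labels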
	I0819 12:12:51.397877 2104204 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:51.397972 2104204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:12:51.398984 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:12:51.399242 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 12:12:51.399257 2104204 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:12:51.399286 2104204 start.go:241] waiting for startup goroutines ...
	I0819 12:12:51.399296 2104204 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:12:51.399365 2104204 addons.go:69] Setting default-storageclass=true in profile "ha-584983"
	I0819 12:12:51.399370 2104204 addons.go:69] Setting storage-provisioner=true in profile "ha-584983"
	I0819 12:12:51.399407 2104204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-584983"
	I0819 12:12:51.399412 2104204 addons.go:234] Setting addon storage-provisioner=true in "ha-584983"
	I0819 12:12:51.399444 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:12:51.399477 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:51.399840 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.399869 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.399900 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.399948 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.421032 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0819 12:12:51.421185 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0819 12:12:51.421626 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.421629 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.422205 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.422208 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.422238 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.422254 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.422625 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.422631 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.422894 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:51.423233 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.423266 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.424995 2104204 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:12:51.425231 2104204 kapi.go:59] client config for ha-584983: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 12:12:51.425677 2104204 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 12:12:51.425960 2104204 addons.go:234] Setting addon default-storageclass=true in "ha-584983"
	I0819 12:12:51.426008 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:12:51.426274 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.426322 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.438922 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0819 12:12:51.439388 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.439922 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.439948 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.440311 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.440517 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:51.440666 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43435
	I0819 12:12:51.440974 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.441403 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.441426 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.441721 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.442412 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:51.442452 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:51.442669 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:51.444383 2104204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:12:51.445505 2104204 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:12:51.445521 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:12:51.445542 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:51.448647 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.449119 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:51.449149 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.449280 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:51.449477 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:51.449641 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:51.449777 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:51.457718 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0819 12:12:51.458180 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:51.458622 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:51.458645 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:51.458984 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:51.459183 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:12:51.460752 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:12:51.460983 2104204 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:12:51.461003 2104204 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:12:51.461022 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:12:51.463519 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.463925 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:12:51.463942 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:12:51.464113 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:12:51.464285 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:12:51.464417 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:12:51.464560 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:12:51.560963 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 12:12:51.633328 2104204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:12:51.663129 2104204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:12:51.971457 2104204 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 12:12:51.971536 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:51.971567 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:51.971890 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:51.971913 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:51.971926 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:51.971936 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:51.971944 2104204 main.go:141] libmachine: (ha-584983) DBG | Closing plugin on server side
	I0819 12:12:51.972219 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:51.972241 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:51.972302 2104204 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 12:12:51.972334 2104204 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 12:12:51.972447 2104204 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 12:12:51.972458 2104204 round_trippers.go:469] Request Headers:
	I0819 12:12:51.972470 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:12:51.972477 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:12:51.983689 2104204 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 12:12:51.984331 2104204 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 12:12:51.984347 2104204 round_trippers.go:469] Request Headers:
	I0819 12:12:51.984363 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:12:51.984368 2104204 round_trippers.go:473]     Content-Type: application/json
	I0819 12:12:51.984374 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:12:51.986585 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:12:51.986749 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:51.986765 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:51.987029 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:51.987051 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:52.163748 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:52.163774 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:52.164125 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:52.164149 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:52.164172 2104204 main.go:141] libmachine: Making call to close driver server
	I0819 12:12:52.164177 2104204 main.go:141] libmachine: (ha-584983) DBG | Closing plugin on server side
	I0819 12:12:52.164182 2104204 main.go:141] libmachine: (ha-584983) Calling .Close
	I0819 12:12:52.164499 2104204 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:12:52.164595 2104204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:12:52.164567 2104204 main.go:141] libmachine: (ha-584983) DBG | Closing plugin on server side
	I0819 12:12:52.165890 2104204 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 12:12:52.166840 2104204 addons.go:510] duration metric: took 767.541376ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 12:12:52.166878 2104204 start.go:246] waiting for cluster config update ...
	I0819 12:12:52.166891 2104204 start.go:255] writing updated cluster config ...
	I0819 12:12:52.168097 2104204 out.go:201] 
	I0819 12:12:52.169294 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:12:52.169394 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:52.170928 2104204 out.go:177] * Starting "ha-584983-m02" control-plane node in "ha-584983" cluster
	I0819 12:12:52.172185 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:12:52.172212 2104204 cache.go:56] Caching tarball of preloaded images
	I0819 12:12:52.172335 2104204 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:12:52.172354 2104204 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:12:52.172451 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:12:52.172672 2104204 start.go:360] acquireMachinesLock for ha-584983-m02: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:12:52.172737 2104204 start.go:364] duration metric: took 35.378µs to acquireMachinesLock for "ha-584983-m02"
	I0819 12:12:52.172765 2104204 start.go:93] Provisioning new machine with config: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:12:52.172870 2104204 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 12:12:52.174878 2104204 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:12:52.174966 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:12:52.174989 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:12:52.190089 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0819 12:12:52.190526 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:12:52.190957 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:12:52.190981 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:12:52.191295 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:12:52.191483 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:12:52.191630 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:12:52.191765 2104204 start.go:159] libmachine.API.Create for "ha-584983" (driver="kvm2")
	I0819 12:12:52.191796 2104204 client.go:168] LocalClient.Create starting
	I0819 12:12:52.191831 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 12:12:52.191873 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:52.191888 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:52.191938 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 12:12:52.191961 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:12:52.191973 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:12:52.191991 2104204 main.go:141] libmachine: Running pre-create checks...
	I0819 12:12:52.191999 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .PreCreateCheck
	I0819 12:12:52.192194 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetConfigRaw
	I0819 12:12:52.192614 2104204 main.go:141] libmachine: Creating machine...
	I0819 12:12:52.192633 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .Create
	I0819 12:12:52.192771 2104204 main.go:141] libmachine: (ha-584983-m02) Creating KVM machine...
	I0819 12:12:52.193817 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found existing default KVM network
	I0819 12:12:52.193975 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found existing private KVM network mk-ha-584983
	I0819 12:12:52.194093 2104204 main.go:141] libmachine: (ha-584983-m02) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02 ...
	I0819 12:12:52.194120 2104204 main.go:141] libmachine: (ha-584983-m02) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:12:52.194165 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.194070 2104574 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:52.194291 2104204 main.go:141] libmachine: (ha-584983-m02) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:12:52.456627 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.456462 2104574 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa...
	I0819 12:12:52.631668 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.631547 2104574 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/ha-584983-m02.rawdisk...
	I0819 12:12:52.631700 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Writing magic tar header
	I0819 12:12:52.631712 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Writing SSH key tar header
	I0819 12:12:52.633049 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:52.632918 2104574 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02 ...
	I0819 12:12:52.633168 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02
	I0819 12:12:52.633198 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 12:12:52.633213 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02 (perms=drwx------)
	I0819 12:12:52.633228 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:12:52.633239 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:12:52.633258 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 12:12:52.633273 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 12:12:52.633286 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:12:52.633298 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:12:52.633309 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Checking permissions on dir: /home
	I0819 12:12:52.633320 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Skipping /home - not owner
	I0819 12:12:52.633336 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 12:12:52.633345 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:12:52.633356 2104204 main.go:141] libmachine: (ha-584983-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:12:52.633363 2104204 main.go:141] libmachine: (ha-584983-m02) Creating domain...
	I0819 12:12:52.634304 2104204 main.go:141] libmachine: (ha-584983-m02) define libvirt domain using xml: 
	I0819 12:12:52.634328 2104204 main.go:141] libmachine: (ha-584983-m02) <domain type='kvm'>
	I0819 12:12:52.634337 2104204 main.go:141] libmachine: (ha-584983-m02)   <name>ha-584983-m02</name>
	I0819 12:12:52.634348 2104204 main.go:141] libmachine: (ha-584983-m02)   <memory unit='MiB'>2200</memory>
	I0819 12:12:52.634358 2104204 main.go:141] libmachine: (ha-584983-m02)   <vcpu>2</vcpu>
	I0819 12:12:52.634368 2104204 main.go:141] libmachine: (ha-584983-m02)   <features>
	I0819 12:12:52.634380 2104204 main.go:141] libmachine: (ha-584983-m02)     <acpi/>
	I0819 12:12:52.634389 2104204 main.go:141] libmachine: (ha-584983-m02)     <apic/>
	I0819 12:12:52.634399 2104204 main.go:141] libmachine: (ha-584983-m02)     <pae/>
	I0819 12:12:52.634408 2104204 main.go:141] libmachine: (ha-584983-m02)     
	I0819 12:12:52.634419 2104204 main.go:141] libmachine: (ha-584983-m02)   </features>
	I0819 12:12:52.634433 2104204 main.go:141] libmachine: (ha-584983-m02)   <cpu mode='host-passthrough'>
	I0819 12:12:52.634444 2104204 main.go:141] libmachine: (ha-584983-m02)   
	I0819 12:12:52.634452 2104204 main.go:141] libmachine: (ha-584983-m02)   </cpu>
	I0819 12:12:52.634471 2104204 main.go:141] libmachine: (ha-584983-m02)   <os>
	I0819 12:12:52.634483 2104204 main.go:141] libmachine: (ha-584983-m02)     <type>hvm</type>
	I0819 12:12:52.634491 2104204 main.go:141] libmachine: (ha-584983-m02)     <boot dev='cdrom'/>
	I0819 12:12:52.634507 2104204 main.go:141] libmachine: (ha-584983-m02)     <boot dev='hd'/>
	I0819 12:12:52.634519 2104204 main.go:141] libmachine: (ha-584983-m02)     <bootmenu enable='no'/>
	I0819 12:12:52.634528 2104204 main.go:141] libmachine: (ha-584983-m02)   </os>
	I0819 12:12:52.634536 2104204 main.go:141] libmachine: (ha-584983-m02)   <devices>
	I0819 12:12:52.634551 2104204 main.go:141] libmachine: (ha-584983-m02)     <disk type='file' device='cdrom'>
	I0819 12:12:52.634569 2104204 main.go:141] libmachine: (ha-584983-m02)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/boot2docker.iso'/>
	I0819 12:12:52.634580 2104204 main.go:141] libmachine: (ha-584983-m02)       <target dev='hdc' bus='scsi'/>
	I0819 12:12:52.634590 2104204 main.go:141] libmachine: (ha-584983-m02)       <readonly/>
	I0819 12:12:52.634599 2104204 main.go:141] libmachine: (ha-584983-m02)     </disk>
	I0819 12:12:52.634630 2104204 main.go:141] libmachine: (ha-584983-m02)     <disk type='file' device='disk'>
	I0819 12:12:52.634657 2104204 main.go:141] libmachine: (ha-584983-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:12:52.634675 2104204 main.go:141] libmachine: (ha-584983-m02)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/ha-584983-m02.rawdisk'/>
	I0819 12:12:52.634698 2104204 main.go:141] libmachine: (ha-584983-m02)       <target dev='hda' bus='virtio'/>
	I0819 12:12:52.634710 2104204 main.go:141] libmachine: (ha-584983-m02)     </disk>
	I0819 12:12:52.634724 2104204 main.go:141] libmachine: (ha-584983-m02)     <interface type='network'>
	I0819 12:12:52.634736 2104204 main.go:141] libmachine: (ha-584983-m02)       <source network='mk-ha-584983'/>
	I0819 12:12:52.634749 2104204 main.go:141] libmachine: (ha-584983-m02)       <model type='virtio'/>
	I0819 12:12:52.634759 2104204 main.go:141] libmachine: (ha-584983-m02)     </interface>
	I0819 12:12:52.634771 2104204 main.go:141] libmachine: (ha-584983-m02)     <interface type='network'>
	I0819 12:12:52.634781 2104204 main.go:141] libmachine: (ha-584983-m02)       <source network='default'/>
	I0819 12:12:52.634792 2104204 main.go:141] libmachine: (ha-584983-m02)       <model type='virtio'/>
	I0819 12:12:52.634806 2104204 main.go:141] libmachine: (ha-584983-m02)     </interface>
	I0819 12:12:52.634818 2104204 main.go:141] libmachine: (ha-584983-m02)     <serial type='pty'>
	I0819 12:12:52.634832 2104204 main.go:141] libmachine: (ha-584983-m02)       <target port='0'/>
	I0819 12:12:52.634843 2104204 main.go:141] libmachine: (ha-584983-m02)     </serial>
	I0819 12:12:52.634854 2104204 main.go:141] libmachine: (ha-584983-m02)     <console type='pty'>
	I0819 12:12:52.634866 2104204 main.go:141] libmachine: (ha-584983-m02)       <target type='serial' port='0'/>
	I0819 12:12:52.634876 2104204 main.go:141] libmachine: (ha-584983-m02)     </console>
	I0819 12:12:52.634887 2104204 main.go:141] libmachine: (ha-584983-m02)     <rng model='virtio'>
	I0819 12:12:52.634899 2104204 main.go:141] libmachine: (ha-584983-m02)       <backend model='random'>/dev/random</backend>
	I0819 12:12:52.634911 2104204 main.go:141] libmachine: (ha-584983-m02)     </rng>
	I0819 12:12:52.634921 2104204 main.go:141] libmachine: (ha-584983-m02)     
	I0819 12:12:52.634929 2104204 main.go:141] libmachine: (ha-584983-m02)     
	I0819 12:12:52.634938 2104204 main.go:141] libmachine: (ha-584983-m02)   </devices>
	I0819 12:12:52.634946 2104204 main.go:141] libmachine: (ha-584983-m02) </domain>
	I0819 12:12:52.634954 2104204 main.go:141] libmachine: (ha-584983-m02) 
	I0819 12:12:52.642110 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:d5:2f:89 in network default
	I0819 12:12:52.642724 2104204 main.go:141] libmachine: (ha-584983-m02) Ensuring networks are active...
	I0819 12:12:52.642741 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:52.643453 2104204 main.go:141] libmachine: (ha-584983-m02) Ensuring network default is active
	I0819 12:12:52.643785 2104204 main.go:141] libmachine: (ha-584983-m02) Ensuring network mk-ha-584983 is active
	I0819 12:12:52.644156 2104204 main.go:141] libmachine: (ha-584983-m02) Getting domain xml...
	I0819 12:12:52.644973 2104204 main.go:141] libmachine: (ha-584983-m02) Creating domain...
	I0819 12:12:53.897826 2104204 main.go:141] libmachine: (ha-584983-m02) Waiting to get IP...
	I0819 12:12:53.898857 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:53.899351 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:53.899383 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:53.899299 2104574 retry.go:31] will retry after 269.031574ms: waiting for machine to come up
	I0819 12:12:54.169941 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:54.170433 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:54.170466 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:54.170386 2104574 retry.go:31] will retry after 286.450158ms: waiting for machine to come up
	I0819 12:12:54.459019 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:54.459477 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:54.459503 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:54.459432 2104574 retry.go:31] will retry after 334.015871ms: waiting for machine to come up
	I0819 12:12:54.795032 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:54.795539 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:54.795567 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:54.795485 2104574 retry.go:31] will retry after 458.775137ms: waiting for machine to come up
	I0819 12:12:55.256213 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:55.256685 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:55.256716 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:55.256624 2104574 retry.go:31] will retry after 686.499173ms: waiting for machine to come up
	I0819 12:12:55.944647 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:55.945090 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:55.945115 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:55.945044 2104574 retry.go:31] will retry after 761.90458ms: waiting for machine to come up
	I0819 12:12:56.709095 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:56.709452 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:56.709476 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:56.709412 2104574 retry.go:31] will retry after 792.835586ms: waiting for machine to come up
	I0819 12:12:57.503750 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:57.504255 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:57.504276 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:57.504205 2104574 retry.go:31] will retry after 1.04667002s: waiting for machine to come up
	I0819 12:12:58.552082 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:12:58.552542 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:12:58.552575 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:12:58.552496 2104574 retry.go:31] will retry after 1.754183402s: waiting for machine to come up
	I0819 12:13:00.309257 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:00.309657 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:00.309688 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:00.309599 2104574 retry.go:31] will retry after 1.730425769s: waiting for machine to come up
	I0819 12:13:02.041517 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:02.042048 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:02.042077 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:02.041987 2104574 retry.go:31] will retry after 2.450526902s: waiting for machine to come up
	I0819 12:13:04.494260 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:04.494637 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:04.494665 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:04.494594 2104574 retry.go:31] will retry after 2.299318579s: waiting for machine to come up
	I0819 12:13:06.797309 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:06.797784 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:06.797814 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:06.797723 2104574 retry.go:31] will retry after 3.860898262s: waiting for machine to come up
	I0819 12:13:10.659927 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:10.660360 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find current IP address of domain ha-584983-m02 in network mk-ha-584983
	I0819 12:13:10.660384 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | I0819 12:13:10.660315 2104574 retry.go:31] will retry after 4.292802453s: waiting for machine to come up
	I0819 12:13:14.954373 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:14.954842 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has current primary IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:14.954865 2104204 main.go:141] libmachine: (ha-584983-m02) Found IP for machine: 192.168.39.84
	I0819 12:13:14.954879 2104204 main.go:141] libmachine: (ha-584983-m02) Reserving static IP address...
	I0819 12:13:14.955281 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | unable to find host DHCP lease matching {name: "ha-584983-m02", mac: "52:54:00:2d:f2:2d", ip: "192.168.39.84"} in network mk-ha-584983
	I0819 12:13:15.033627 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Getting to WaitForSSH function...
	I0819 12:13:15.033664 2104204 main.go:141] libmachine: (ha-584983-m02) Reserved static IP address: 192.168.39.84
	I0819 12:13:15.033679 2104204 main.go:141] libmachine: (ha-584983-m02) Waiting for SSH to be available...
	I0819 12:13:15.036210 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.036611 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.036650 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.036797 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Using SSH client type: external
	I0819 12:13:15.036822 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa (-rw-------)
	I0819 12:13:15.036849 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:13:15.036861 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | About to run SSH command:
	I0819 12:13:15.036878 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | exit 0
	I0819 12:13:15.159773 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 12:13:15.160031 2104204 main.go:141] libmachine: (ha-584983-m02) KVM machine creation complete!
	I0819 12:13:15.160377 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetConfigRaw
	I0819 12:13:15.160953 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:15.161202 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:15.161437 2104204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:13:15.161460 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:13:15.162675 2104204 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:13:15.162691 2104204 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:13:15.162699 2104204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:13:15.162709 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.165525 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.165876 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.165913 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.166019 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.166206 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.166394 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.166530 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.166681 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.166911 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.166945 2104204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:13:15.263398 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:13:15.263425 2104204 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:13:15.263433 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.266341 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.266832 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.266862 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.267065 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.267278 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.267457 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.267606 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.267783 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.267969 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.267980 2104204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:13:15.368570 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:13:15.368650 2104204 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:13:15.368663 2104204 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:13:15.368677 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:13:15.368911 2104204 buildroot.go:166] provisioning hostname "ha-584983-m02"
	I0819 12:13:15.368939 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:13:15.369127 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.371769 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.372182 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.372211 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.372374 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.372552 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.372722 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.372851 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.373015 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.373213 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.373226 2104204 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983-m02 && echo "ha-584983-m02" | sudo tee /etc/hostname
	I0819 12:13:15.486473 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983-m02
	
	I0819 12:13:15.486504 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.489205 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.489560 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.489592 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.489766 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.489974 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.490161 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.490302 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.490458 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.490700 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.490724 2104204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:13:15.601391 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:13:15.601429 2104204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:13:15.601452 2104204 buildroot.go:174] setting up certificates
	I0819 12:13:15.601470 2104204 provision.go:84] configureAuth start
	I0819 12:13:15.601484 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetMachineName
	I0819 12:13:15.601783 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:15.604452 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.604821 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.604851 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.605013 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.607293 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.607599 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.607633 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.607774 2104204 provision.go:143] copyHostCerts
	I0819 12:13:15.607809 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:13:15.607851 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:13:15.607862 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:13:15.607957 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:13:15.608063 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:13:15.608099 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:13:15.608120 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:13:15.608185 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:13:15.608272 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:13:15.608295 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:13:15.608304 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:13:15.608342 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:13:15.608423 2104204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983-m02 san=[127.0.0.1 192.168.39.84 ha-584983-m02 localhost minikube]
	I0819 12:13:15.690822 2104204 provision.go:177] copyRemoteCerts
	I0819 12:13:15.690884 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:13:15.690918 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.693514 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.693827 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.693873 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.694023 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.694227 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.694390 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.694523 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:15.774217 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:13:15.774299 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:13:15.797945 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:13:15.798041 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:13:15.820941 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:13:15.821009 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:13:15.843686 2104204 provision.go:87] duration metric: took 242.200857ms to configureAuth
	I0819 12:13:15.843718 2104204 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:13:15.843978 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:13:15.844099 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:15.846609 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.846980 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:15.847011 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:15.847179 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:15.847371 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.847571 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:15.847759 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:15.847942 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:15.848188 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:15.848209 2104204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:13:16.116033 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:13:16.116068 2104204 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:13:16.116078 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetURL
	I0819 12:13:16.117506 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | Using libvirt version 6000000
	I0819 12:13:16.119776 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.120280 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.120310 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.120512 2104204 main.go:141] libmachine: Docker is up and running!
	I0819 12:13:16.120530 2104204 main.go:141] libmachine: Reticulating splines...
	I0819 12:13:16.120539 2104204 client.go:171] duration metric: took 23.928733735s to LocalClient.Create
	I0819 12:13:16.120571 2104204 start.go:167] duration metric: took 23.928807758s to libmachine.API.Create "ha-584983"
	I0819 12:13:16.120582 2104204 start.go:293] postStartSetup for "ha-584983-m02" (driver="kvm2")
	I0819 12:13:16.120593 2104204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:13:16.120614 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.120877 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:13:16.120903 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:16.123684 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.124066 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.124095 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.124214 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.124426 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.124611 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.124749 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:16.202320 2104204 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:13:16.206514 2104204 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:13:16.206547 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:13:16.206626 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:13:16.206722 2104204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:13:16.206733 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:13:16.206815 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:13:16.216307 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:13:16.239317 2104204 start.go:296] duration metric: took 118.712174ms for postStartSetup
	I0819 12:13:16.239379 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetConfigRaw
	I0819 12:13:16.240007 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:16.242470 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.242695 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.242724 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.242925 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:13:16.243171 2104204 start.go:128] duration metric: took 24.070287155s to createHost
	I0819 12:13:16.243194 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:16.245332 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.245607 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.245627 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.245818 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.245986 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.246145 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.246293 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.246440 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:13:16.246645 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0819 12:13:16.246655 2104204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:13:16.345888 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069596.304140552
	
	I0819 12:13:16.345915 2104204 fix.go:216] guest clock: 1724069596.304140552
	I0819 12:13:16.345923 2104204 fix.go:229] Guest: 2024-08-19 12:13:16.304140552 +0000 UTC Remote: 2024-08-19 12:13:16.243182787 +0000 UTC m=+69.285548649 (delta=60.957765ms)
	I0819 12:13:16.345939 2104204 fix.go:200] guest clock delta is within tolerance: 60.957765ms
	I0819 12:13:16.345945 2104204 start.go:83] releasing machines lock for "ha-584983-m02", held for 24.173196064s
	I0819 12:13:16.345964 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.346213 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:16.348615 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.348973 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.349003 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.350946 2104204 out.go:177] * Found network options:
	I0819 12:13:16.352133 2104204 out.go:177]   - NO_PROXY=192.168.39.12
	W0819 12:13:16.353200 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:13:16.353240 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.353704 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.353896 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:13:16.353998 2104204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:13:16.354037 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	W0819 12:13:16.354124 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:13:16.354204 2104204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:13:16.354231 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:13:16.356770 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.356843 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.357194 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.357220 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:16.357241 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.357277 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:16.357452 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.357567 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:13:16.357659 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.357733 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:13:16.357793 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.357903 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:13:16.357984 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:16.358024 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:13:16.584992 2104204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:13:16.591193 2104204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:13:16.591273 2104204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:13:16.609110 2104204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
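The find/-exec invocation above is logged with its shell quoting stripped. Restored, the same rename looks roughly like this (an equivalent sketch, not a verbatim copy of what ran on the node):

    # Rename any bridge/podman CNI configs out of the way so they do not
    # conflict with the CNI minikube installs later (quoting restored).
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;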
	I0819 12:13:16.609150 2104204 start.go:495] detecting cgroup driver to use...
	I0819 12:13:16.609243 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:13:16.627685 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:13:16.641909 2104204 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:13:16.641983 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:13:16.655932 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:13:16.669791 2104204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:13:16.787142 2104204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:13:16.928691 2104204 docker.go:233] disabling docker service ...
	I0819 12:13:16.928790 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:13:16.954085 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:13:16.967075 2104204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:13:17.096747 2104204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:13:17.218115 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:13:17.231283 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:13:17.248621 2104204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:13:17.248688 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.258707 2104204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:13:17.258773 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.268699 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.278618 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.290741 2104204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:13:17.301434 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.311429 2104204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.327588 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:13:17.337676 2104204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:13:17.347153 2104204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:13:17.347207 2104204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:13:17.360141 2104204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:13:17.369969 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:13:17.510522 2104204 ssh_runner.go:195] Run: sudo systemctl restart crio
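Taken together, the steps since the crictl.yaml write configure CRI-O for this node: crictl is pointed at CRI-O's socket, the pause image and cgroup driver are set, the unprivileged-port sysctl is added, br_netfilter is loaded, and ip_forward is enabled before crio is restarted. A sketch of what the drop-in should contain afterwards, inferred from the sed expressions in the log rather than read back from the VM:

    # Expected shape of /etc/crio/crio.conf.d/02-crio.conf after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' \
      /etc/crio/crio.conf.d/02-crio.conf
    # crictl should now reach CRI-O without extra flags, per /etc/crictl.yaml:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version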
	I0819 12:13:17.650044 2104204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:13:17.650131 2104204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:13:17.655040 2104204 start.go:563] Will wait 60s for crictl version
	I0819 12:13:17.655123 2104204 ssh_runner.go:195] Run: which crictl
	I0819 12:13:17.658744 2104204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:13:17.697870 2104204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:13:17.697953 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:13:17.725755 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:13:17.753744 2104204 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:13:17.755025 2104204 out.go:177]   - env NO_PROXY=192.168.39.12
	I0819 12:13:17.756219 2104204 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:13:17.759092 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:17.759514 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:13:06 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:13:17.759546 2104204 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:13:17.759749 2104204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:13:17.763774 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:13:17.775835 2104204 mustload.go:65] Loading cluster: ha-584983
	I0819 12:13:17.776043 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:13:17.776347 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:13:17.776391 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:13:17.791847 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0819 12:13:17.792302 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:13:17.792768 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:13:17.792792 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:13:17.793107 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:13:17.793299 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:13:17.794851 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:13:17.795163 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:13:17.795199 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:13:17.809895 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0819 12:13:17.810227 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:13:17.810699 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:13:17.810721 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:13:17.811093 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:13:17.811289 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:13:17.811468 2104204 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.84
	I0819 12:13:17.811480 2104204 certs.go:194] generating shared ca certs ...
	I0819 12:13:17.811493 2104204 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:13:17.811628 2104204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:13:17.811667 2104204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:13:17.811676 2104204 certs.go:256] generating profile certs ...
	I0819 12:13:17.811748 2104204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:13:17.811772 2104204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22
	I0819 12:13:17.811789 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.84 192.168.39.254]
	I0819 12:13:17.912093 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22 ...
	I0819 12:13:17.912134 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22: {Name:mkf7c7e48974c30f09a51e784b35b6e29fbaf05b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:13:17.912352 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22 ...
	I0819 12:13:17.912367 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22: {Name:mkbef6913de8786ee979fef7540740082bd9337a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:13:17.912454 2104204 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.6213bf22 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:13:17.912603 2104204 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.6213bf22 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:13:17.912746 2104204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:13:17.912765 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:13:17.912781 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:13:17.912797 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:13:17.912812 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:13:17.912827 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:13:17.912841 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:13:17.912856 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:13:17.912870 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:13:17.912925 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:13:17.912957 2104204 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:13:17.912972 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:13:17.913001 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:13:17.913027 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:13:17.913055 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:13:17.913103 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:13:17.913138 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:13:17.913155 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:17.913167 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:13:17.913222 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:13:17.916483 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:17.916872 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:13:17.916903 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:17.917087 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:13:17.917286 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:13:17.917455 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:13:17.917584 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:13:17.992523 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 12:13:17.997745 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 12:13:18.010072 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 12:13:18.014325 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 12:13:18.026626 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 12:13:18.031010 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 12:13:18.042543 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 12:13:18.046696 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 12:13:18.060239 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 12:13:18.064495 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 12:13:18.076366 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 12:13:18.080628 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 12:13:18.093027 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:13:18.121005 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:13:18.147183 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:13:18.170001 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:13:18.192130 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 12:13:18.214072 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:13:18.236341 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:13:18.258614 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:13:18.281364 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:13:18.304981 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:13:18.327846 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:13:18.351162 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 12:13:18.366683 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 12:13:18.382270 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 12:13:18.397604 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 12:13:18.413131 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 12:13:18.429199 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 12:13:18.447642 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 12:13:18.464137 2104204 ssh_runner.go:195] Run: openssl version
	I0819 12:13:18.469445 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:13:18.479674 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:13:18.484696 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:13:18.484746 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:13:18.492272 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:13:18.502754 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:13:18.513796 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:18.518268 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:18.518319 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:13:18.523951 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:13:18.534591 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:13:18.545352 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:13:18.549668 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:13:18.549715 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:13:18.555221 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
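The three blocks above install each CA into /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 in this run). Reduced to a single certificate, the pattern is (sketch; the file name is one of the paths already shown in the log):

    # Standard OpenSSL hashed-directory layout: symlink named after the subject hash.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"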
	I0819 12:13:18.565441 2104204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:13:18.569203 2104204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:13:18.569270 2104204 kubeadm.go:934] updating node {m02 192.168.39.84 8443 v1.31.0 crio true true} ...
	I0819 12:13:18.569372 2104204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
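The kubelet drop-in generated above (ExecStart with --node-ip and --hostname-override for ha-584983-m02) is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down and picked up with daemon-reload. A hedged sketch of confirming the flags once kubelet is running on the node (illustrative; not part of the test run):

    # Show the generated drop-in and the node-ip flag on the running process.
    systemctl cat kubelet | grep -E 'node-ip|hostname-override'
    pgrep -a kubelet | grep -o -- '--node-ip=[^ ]*'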
	I0819 12:13:18.569403 2104204 kube-vip.go:115] generating kube-vip config ...
	I0819 12:13:18.569442 2104204 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:13:18.583816 2104204 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:13:18.583882 2104204 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
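The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml further down, so kube-vip runs as a static pod on each control-plane node; leader election on the plndr-cp-lock Lease decides which node advertises the 192.168.39.254 VIP on eth0. Two quick checks one could run afterwards (illustrative only, not commands from this run):

    # The current leader should hold the VIP on eth0:
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # The election lease lives in kube-system:
    kubectl -n kube-system get lease plndr-cp-lock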
	I0819 12:13:18.583939 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:13:18.596181 2104204 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 12:13:18.596244 2104204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 12:13:18.605951 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 12:13:18.605980 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:13:18.606032 2104204 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 12:13:18.606062 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:13:18.606065 2104204 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 12:13:18.610424 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 12:13:18.610452 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 12:13:19.217885 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:13:19.217977 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:13:19.223091 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 12:13:19.223126 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 12:13:19.302418 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:13:19.339074 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:13:19.339199 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:13:19.352911 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 12:13:19.352965 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
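kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a companion .sha256 file for verification, cached locally, and then pushed to /var/lib/minikube/binaries/v1.31.0 on the node. The same fetch-and-verify pattern done by hand for one binary (a sketch; the test itself uses minikube's internal downloader and cache):

    V=v1.31.0
    curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet"
    # The .sha256 file contains only the digest, so append the filename for sha256sum.
    echo "$(curl -fsSL https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet.sha256)  kubelet" \
      | sha256sum --check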
	I0819 12:13:19.787052 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 12:13:19.796620 2104204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 12:13:19.812768 2104204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:13:19.828987 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 12:13:19.845609 2104204 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:13:19.849818 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:13:19.861451 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:13:19.983011 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:13:19.998915 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:13:19.999439 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:13:19.999496 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:13:20.016891 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0819 12:13:20.017359 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:13:20.017948 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:13:20.017975 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:13:20.018341 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:13:20.018586 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:13:20.018745 2104204 start.go:317] joinCluster: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:13:20.018853 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 12:13:20.018879 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:13:20.022083 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:20.022566 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:13:20.022596 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:13:20.022799 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:13:20.022987 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:13:20.023162 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:13:20.023336 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:13:20.166884 2104204 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:13:20.166936 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7rwmb2.eenai9a2que74cw8 --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m02 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0819 12:13:41.434545 2104204 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7rwmb2.eenai9a2que74cw8 --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m02 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (21.267577558s)
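This is the two-step control-plane join: the primary mints a join command with kubeadm token create --print-join-command --ttl=0 (the PATH-prefixed invocation above), and the new node runs it with the extra flags minikube appends; the serving certificates were already copied to the node earlier in the log, which is presumably why no --certificate-key is passed. Condensed into a sketch (the token and CA hash are the ones already printed above and are only valid for this run):

    # On the primary control plane:
    JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
    # On ha-584983-m02, with the flags minikube adds:
    sudo ${JOIN_CMD} --control-plane --apiserver-advertise-address=192.168.39.84 \
      --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock \
      --ignore-preflight-errors=all --node-name=ha-584983-m02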
	I0819 12:13:41.434592 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 12:13:42.093285 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-584983-m02 minikube.k8s.io/updated_at=2024_08_19T12_13_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=ha-584983 minikube.k8s.io/primary=false
	I0819 12:13:42.220239 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-584983-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 12:13:42.334084 2104204 start.go:319] duration metric: took 22.315331348s to joinCluster
	I0819 12:13:42.334202 2104204 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:13:42.334556 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:13:42.335698 2104204 out.go:177] * Verifying Kubernetes components...
	I0819 12:13:42.336881 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:13:42.604776 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:13:42.660839 2104204 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:13:42.661134 2104204 kapi.go:59] client config for ha-584983: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 12:13:42.661210 2104204 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0819 12:13:42.661466 2104204 node_ready.go:35] waiting up to 6m0s for node "ha-584983-m02" to be "Ready" ...
	I0819 12:13:42.661578 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:42.661589 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:42.661600 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:42.661608 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:42.712677 2104204 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I0819 12:13:43.161749 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:43.161775 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:43.161784 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:43.161788 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:43.175091 2104204 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0819 12:13:43.661811 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:43.661838 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:43.661850 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:43.661856 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:43.665180 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:44.161889 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:44.161919 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:44.161931 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:44.161936 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:44.165148 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:44.661957 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:44.661980 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:44.661990 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:44.661995 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:44.665188 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:44.665862 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
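The node_ready loop here polls GET /api/v1/nodes/ha-584983-m02 roughly every half second until the Ready condition flips to True. A roughly equivalent one-liner with kubectl (illustrative only; minikube talks to the API directly rather than shelling out, and the context name is assumed from the profile):

    kubectl --context ha-584983 wait --for=condition=Ready node/ha-584983-m02 --timeout=6m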
	I0819 12:13:45.161929 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:45.161956 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:45.161967 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:45.161972 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:45.165189 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:45.662183 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:45.662216 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:45.662235 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:45.662241 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:45.665239 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:46.162091 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:46.162119 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:46.162128 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:46.162136 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:46.165208 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:46.662278 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:46.662307 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:46.662317 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:46.662323 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:46.665246 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:47.162131 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:47.162156 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:47.162165 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:47.162168 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:47.165302 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:47.166088 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:47.661922 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:47.661947 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:47.661957 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:47.661962 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:47.668230 2104204 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 12:13:48.162228 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:48.162256 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:48.162274 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:48.162291 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:48.165734 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:48.662622 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:48.662648 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:48.662655 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:48.662659 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:48.681038 2104204 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0819 12:13:49.161784 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:49.161810 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:49.161819 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:49.161824 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:49.164791 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:49.661880 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:49.661906 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:49.661915 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:49.661919 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:49.664936 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:49.665639 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:50.162389 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:50.162415 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:50.162423 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:50.162427 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:50.165446 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:50.661644 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:50.661670 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:50.661679 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:50.661684 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:50.664654 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:51.161667 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:51.161692 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:51.161700 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:51.161705 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:51.164747 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:51.661927 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:51.661953 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:51.661966 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:51.661971 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:51.665238 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:51.665748 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:52.162544 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:52.162574 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:52.162583 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:52.162587 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:52.166161 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:52.662677 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:52.662708 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:52.662718 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:52.662723 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:52.666365 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:53.162253 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:53.162277 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:53.162286 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:53.162291 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:53.165770 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:53.661871 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:53.661897 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:53.661905 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:53.661909 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:53.665448 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:53.665984 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:54.162481 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:54.162506 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:54.162520 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:54.162524 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:54.165761 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:54.661797 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:54.661826 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:54.661835 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:54.661840 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:54.664870 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:55.162164 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:55.162192 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:55.162204 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:55.162210 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:55.167627 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:13:55.661768 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:55.661798 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:55.661817 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:55.661822 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:55.665349 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:56.161811 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:56.161839 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.161848 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.161854 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.165879 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:56.167315 2104204 node_ready.go:53] node "ha-584983-m02" has status "Ready":"False"
	I0819 12:13:56.661653 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:56.661693 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.661703 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.661706 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.664744 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:56.665583 2104204 node_ready.go:49] node "ha-584983-m02" has status "Ready":"True"
	I0819 12:13:56.665604 2104204 node_ready.go:38] duration metric: took 14.004121231s for node "ha-584983-m02" to be "Ready" ...
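The repeated GET requests against /api/v1/nodes/ha-584983-m02 above are the node_ready wait polling the node's Ready condition roughly every 500ms until it flips to True. A minimal, hypothetical client-go sketch of that pattern (not minikube's actual implementation; the clientset construction and names are assumptions):

```go
// Sketch: poll a node's Ready condition with client-go, roughly as the
// node_ready wait in the log does. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, nodeName string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors, as the log does
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-584983-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```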
	I0819 12:13:56.665614 2104204 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:13:56.665679 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:13:56.665688 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.665695 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.665699 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.669994 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:56.679022 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.679109 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jgknw
	I0819 12:13:56.679119 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.679126 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.679130 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.681549 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.682126 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:56.682144 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.682151 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.682155 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.684476 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.685291 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.685317 2104204 pod_ready.go:82] duration metric: took 6.264511ms for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.685331 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.685393 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wll5k
	I0819 12:13:56.685403 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.685414 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.685424 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.688058 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.688729 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:56.688742 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.688749 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.688754 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.690780 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.691097 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.691112 2104204 pod_ready.go:82] duration metric: took 5.771616ms for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.691120 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.691162 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983
	I0819 12:13:56.691169 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.691176 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.691182 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.693576 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.694036 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:56.694051 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.694057 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.694063 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.695969 2104204 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 12:13:56.696468 2104204 pod_ready.go:93] pod "etcd-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.696483 2104204 pod_ready.go:82] duration metric: took 5.357926ms for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.696491 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.696531 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m02
	I0819 12:13:56.696538 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.696544 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.696550 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.698626 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:56.699057 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:56.699071 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.699077 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.699081 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.701012 2104204 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 12:13:56.701458 2104204 pod_ready.go:93] pod "etcd-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:56.701474 2104204 pod_ready.go:82] duration metric: took 4.976336ms for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.701489 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:56.861811 2104204 request.go:632] Waited for 160.243017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:13:56.861880 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:13:56.861886 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:56.861894 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:56.861899 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:56.865173 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.062160 2104204 request.go:632] Waited for 196.228635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.062259 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.062271 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.062284 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.062296 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.065078 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:57.065629 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:57.065652 2104204 pod_ready.go:82] duration metric: took 364.155693ms for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.065667 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.262578 2104204 request.go:632] Waited for 196.810462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:13:57.262649 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:13:57.262655 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.262662 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.262671 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.265829 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.461778 2104204 request.go:632] Waited for 195.295532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:57.461878 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:57.461888 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.461901 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.461909 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.464889 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:57.465385 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:57.465407 2104204 pod_ready.go:82] duration metric: took 399.732154ms for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.465417 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.662439 2104204 request.go:632] Waited for 196.945806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:13:57.662556 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:13:57.662566 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.662579 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.662588 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.666215 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.861840 2104204 request.go:632] Waited for 194.771805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.861901 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:57.861906 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:57.861913 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:57.861919 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:57.865006 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:57.865691 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:57.865709 2104204 pod_ready.go:82] duration metric: took 400.285505ms for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:57.865719 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.061721 2104204 request.go:632] Waited for 195.8977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:13:58.061795 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:13:58.061801 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.061809 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.061813 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.064976 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:58.262238 2104204 request.go:632] Waited for 196.385787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.262318 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.262324 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.262332 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.262336 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.265171 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:58.265889 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:58.265913 2104204 pod_ready.go:82] duration metric: took 400.18757ms for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.265924 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.461971 2104204 request.go:632] Waited for 195.950791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:13:58.462045 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:13:58.462053 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.462065 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.462072 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.465393 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:58.662437 2104204 request.go:632] Waited for 196.378761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.662500 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:58.662505 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.662513 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.662517 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.666609 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:58.667210 2104204 pod_ready.go:93] pod "kube-proxy-cwlpd" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:58.667237 2104204 pod_ready.go:82] duration metric: took 401.302617ms for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.667250 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:58.862220 2104204 request.go:632] Waited for 194.855301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:13:58.862313 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:13:58.862327 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:58.862343 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:58.862353 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:58.866569 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:59.062704 2104204 request.go:632] Waited for 195.356423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.062802 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.062808 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.062818 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.062826 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.065488 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:13:59.065978 2104204 pod_ready.go:93] pod "kube-proxy-w7x44" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:59.065999 2104204 pod_ready.go:82] duration metric: took 398.739905ms for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.066011 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.262206 2104204 request.go:632] Waited for 196.11142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:13:59.262294 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:13:59.262301 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.262315 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.262319 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.265474 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:59.462668 2104204 request.go:632] Waited for 196.402992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.462730 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:13:59.462735 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.462743 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.462748 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.465782 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:59.466426 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:59.466447 2104204 pod_ready.go:82] duration metric: took 400.429345ms for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.466460 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.662387 2104204 request.go:632] Waited for 195.850288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:13:59.662466 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:13:59.662478 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.662488 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.662495 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.666763 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:13:59.861671 2104204 request.go:632] Waited for 194.314758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:59.861749 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:13:59.861756 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.861767 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.861780 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.864974 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:13:59.865590 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:13:59.865613 2104204 pod_ready.go:82] duration metric: took 399.14712ms for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:13:59.865625 2104204 pod_ready.go:39] duration metric: took 3.199999525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
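The "Waited for ... due to client-side throttling, not priority and fairness" lines in this log come from client-go's local token-bucket rate limiter, which queues requests once the configured QPS is exceeded; server-side API Priority and Fairness is not involved. A sketch of where that limit is configured (values are illustrative, not minikube's settings):

```go
// Sketch: client-side throttling is governed by QPS/Burst on rest.Config.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 5    // steady-state requests per second before the client starts queueing
	cfg.Burst = 10 // short bursts allowed above QPS
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```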
	I0819 12:13:59.865661 2104204 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:13:59.865721 2104204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:13:59.882272 2104204 api_server.go:72] duration metric: took 17.548020274s to wait for apiserver process to appear ...
	I0819 12:13:59.882296 2104204 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:13:59.882317 2104204 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0819 12:13:59.886822 2104204 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0819 12:13:59.886886 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0819 12:13:59.886893 2104204 round_trippers.go:469] Request Headers:
	I0819 12:13:59.886901 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:13:59.886908 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:13:59.887696 2104204 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 12:13:59.887832 2104204 api_server.go:141] control plane version: v1.31.0
	I0819 12:13:59.887857 2104204 api_server.go:131] duration metric: took 5.552468ms to wait for apiserver health ...
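The healthz and /version probes above map to two simple requests against the apiserver: /healthz returns the literal body "ok" when healthy, and /version reports the control-plane version (v1.31.0 in this run). A hedged sketch using client-go's discovery REST client (structure assumed, not minikube's code):

```go
// Sketch: probe apiserver health and version, as the checks above do.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz should return "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version reports the control-plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```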
	I0819 12:13:59.887867 2104204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:14:00.062453 2104204 request.go:632] Waited for 174.490561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.062551 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.062564 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.062577 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.062588 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.067794 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:14:00.072140 2104204 system_pods.go:59] 17 kube-system pods found
	I0819 12:14:00.072246 2104204 system_pods.go:61] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:14:00.072261 2104204 system_pods.go:61] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:14:00.072273 2104204 system_pods.go:61] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:14:00.072280 2104204 system_pods.go:61] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:14:00.072287 2104204 system_pods.go:61] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:14:00.072294 2104204 system_pods.go:61] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:14:00.072300 2104204 system_pods.go:61] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:14:00.072310 2104204 system_pods.go:61] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:14:00.072317 2104204 system_pods.go:61] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:14:00.072326 2104204 system_pods.go:61] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:14:00.072333 2104204 system_pods.go:61] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:14:00.072349 2104204 system_pods.go:61] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:14:00.072358 2104204 system_pods.go:61] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:14:00.072365 2104204 system_pods.go:61] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:14:00.072374 2104204 system_pods.go:61] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:14:00.072380 2104204 system_pods.go:61] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:14:00.072386 2104204 system_pods.go:61] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:14:00.072398 2104204 system_pods.go:74] duration metric: took 184.520222ms to wait for pod list to return data ...
	I0819 12:14:00.072413 2104204 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:14:00.261689 2104204 request.go:632] Waited for 189.153459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:14:00.261779 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:14:00.261788 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.261797 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.261804 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.265763 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:00.266057 2104204 default_sa.go:45] found service account: "default"
	I0819 12:14:00.266082 2104204 default_sa.go:55] duration metric: took 193.659183ms for default service account to be created ...
	I0819 12:14:00.266095 2104204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:14:00.462622 2104204 request.go:632] Waited for 196.406355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.462701 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:14:00.462710 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.462722 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.462731 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.467122 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:14:00.472750 2104204 system_pods.go:86] 17 kube-system pods found
	I0819 12:14:00.472780 2104204 system_pods.go:89] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:14:00.472790 2104204 system_pods.go:89] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:14:00.472796 2104204 system_pods.go:89] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:14:00.472803 2104204 system_pods.go:89] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:14:00.472809 2104204 system_pods.go:89] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:14:00.472816 2104204 system_pods.go:89] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:14:00.472826 2104204 system_pods.go:89] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:14:00.472835 2104204 system_pods.go:89] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:14:00.472842 2104204 system_pods.go:89] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:14:00.472851 2104204 system_pods.go:89] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:14:00.472857 2104204 system_pods.go:89] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:14:00.472866 2104204 system_pods.go:89] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:14:00.472872 2104204 system_pods.go:89] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:14:00.472880 2104204 system_pods.go:89] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:14:00.472886 2104204 system_pods.go:89] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:14:00.472893 2104204 system_pods.go:89] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:14:00.472898 2104204 system_pods.go:89] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:14:00.472912 2104204 system_pods.go:126] duration metric: took 206.804625ms to wait for k8s-apps to be running ...
	I0819 12:14:00.472925 2104204 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:14:00.472982 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:14:00.488525 2104204 system_svc.go:56] duration metric: took 15.583309ms WaitForService to wait for kubelet
	I0819 12:14:00.488559 2104204 kubeadm.go:582] duration metric: took 18.154308907s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:14:00.488580 2104204 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:14:00.661911 2104204 request.go:632] Waited for 173.219136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0819 12:14:00.661975 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0819 12:14:00.661982 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:00.661990 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:00.661994 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:00.665644 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:00.666545 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:14:00.666575 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:14:00.666591 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:14:00.666597 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:14:00.666605 2104204 node_conditions.go:105] duration metric: took 178.019671ms to run NodePressure ...
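The NodePressure verification above reads each node's reported capacity (ephemeral storage and CPU). Roughly equivalent client-go code, for illustration only:

```go
// Sketch: list node cpu and ephemeral-storage capacity from Node status.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```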
	I0819 12:14:00.666620 2104204 start.go:241] waiting for startup goroutines ...
	I0819 12:14:00.666657 2104204 start.go:255] writing updated cluster config ...
	I0819 12:14:00.668733 2104204 out.go:201] 
	I0819 12:14:00.669978 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:00.670073 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:14:00.671581 2104204 out.go:177] * Starting "ha-584983-m03" control-plane node in "ha-584983" cluster
	I0819 12:14:00.672629 2104204 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:14:00.672649 2104204 cache.go:56] Caching tarball of preloaded images
	I0819 12:14:00.672751 2104204 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:14:00.672762 2104204 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:14:00.672841 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:14:00.672996 2104204 start.go:360] acquireMachinesLock for ha-584983-m03: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:14:00.673038 2104204 start.go:364] duration metric: took 23.224µs to acquireMachinesLock for "ha-584983-m03"
	I0819 12:14:00.673059 2104204 start.go:93] Provisioning new machine with config: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:14:00.673148 2104204 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 12:14:00.674437 2104204 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:14:00.674518 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:00.674550 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:00.691854 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0819 12:14:00.692336 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:00.692868 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:00.692891 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:00.693227 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:00.693428 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:00.693593 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:00.693773 2104204 start.go:159] libmachine.API.Create for "ha-584983" (driver="kvm2")
	I0819 12:14:00.693801 2104204 client.go:168] LocalClient.Create starting
	I0819 12:14:00.693833 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 12:14:00.693866 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:14:00.693882 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:14:00.693934 2104204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 12:14:00.693954 2104204 main.go:141] libmachine: Decoding PEM data...
	I0819 12:14:00.693965 2104204 main.go:141] libmachine: Parsing certificate...
	I0819 12:14:00.693980 2104204 main.go:141] libmachine: Running pre-create checks...
	I0819 12:14:00.693988 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .PreCreateCheck
	I0819 12:14:00.694155 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetConfigRaw
	I0819 12:14:00.694531 2104204 main.go:141] libmachine: Creating machine...
	I0819 12:14:00.694547 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .Create
	I0819 12:14:00.694689 2104204 main.go:141] libmachine: (ha-584983-m03) Creating KVM machine...
	I0819 12:14:00.695956 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found existing default KVM network
	I0819 12:14:00.696108 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found existing private KVM network mk-ha-584983
	I0819 12:14:00.696276 2104204 main.go:141] libmachine: (ha-584983-m03) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03 ...
	I0819 12:14:00.696302 2104204 main.go:141] libmachine: (ha-584983-m03) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:14:00.696363 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:00.696252 2104949 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:14:00.696461 2104204 main.go:141] libmachine: (ha-584983-m03) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:14:00.999229 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:00.999099 2104949 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa...
	I0819 12:14:01.120569 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:01.120423 2104949 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/ha-584983-m03.rawdisk...
	I0819 12:14:01.120607 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Writing magic tar header
	I0819 12:14:01.120622 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Writing SSH key tar header
	I0819 12:14:01.120638 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:01.120552 2104949 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03 ...
	I0819 12:14:01.120655 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03
	I0819 12:14:01.120710 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03 (perms=drwx------)
	I0819 12:14:01.120735 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:14:01.120743 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 12:14:01.120757 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:14:01.120764 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 12:14:01.120773 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 12:14:01.120782 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 12:14:01.120790 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:14:01.120798 2104204 main.go:141] libmachine: (ha-584983-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:14:01.120806 2104204 main.go:141] libmachine: (ha-584983-m03) Creating domain...
	I0819 12:14:01.120841 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:14:01.120869 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:14:01.120887 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Checking permissions on dir: /home
	I0819 12:14:01.120900 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Skipping /home - not owner
	I0819 12:14:01.121989 2104204 main.go:141] libmachine: (ha-584983-m03) define libvirt domain using xml: 
	I0819 12:14:01.122014 2104204 main.go:141] libmachine: (ha-584983-m03) <domain type='kvm'>
	I0819 12:14:01.122022 2104204 main.go:141] libmachine: (ha-584983-m03)   <name>ha-584983-m03</name>
	I0819 12:14:01.122031 2104204 main.go:141] libmachine: (ha-584983-m03)   <memory unit='MiB'>2200</memory>
	I0819 12:14:01.122036 2104204 main.go:141] libmachine: (ha-584983-m03)   <vcpu>2</vcpu>
	I0819 12:14:01.122041 2104204 main.go:141] libmachine: (ha-584983-m03)   <features>
	I0819 12:14:01.122046 2104204 main.go:141] libmachine: (ha-584983-m03)     <acpi/>
	I0819 12:14:01.122053 2104204 main.go:141] libmachine: (ha-584983-m03)     <apic/>
	I0819 12:14:01.122086 2104204 main.go:141] libmachine: (ha-584983-m03)     <pae/>
	I0819 12:14:01.122108 2104204 main.go:141] libmachine: (ha-584983-m03)     
	I0819 12:14:01.122116 2104204 main.go:141] libmachine: (ha-584983-m03)   </features>
	I0819 12:14:01.122124 2104204 main.go:141] libmachine: (ha-584983-m03)   <cpu mode='host-passthrough'>
	I0819 12:14:01.122129 2104204 main.go:141] libmachine: (ha-584983-m03)   
	I0819 12:14:01.122146 2104204 main.go:141] libmachine: (ha-584983-m03)   </cpu>
	I0819 12:14:01.122159 2104204 main.go:141] libmachine: (ha-584983-m03)   <os>
	I0819 12:14:01.122170 2104204 main.go:141] libmachine: (ha-584983-m03)     <type>hvm</type>
	I0819 12:14:01.122184 2104204 main.go:141] libmachine: (ha-584983-m03)     <boot dev='cdrom'/>
	I0819 12:14:01.122192 2104204 main.go:141] libmachine: (ha-584983-m03)     <boot dev='hd'/>
	I0819 12:14:01.122198 2104204 main.go:141] libmachine: (ha-584983-m03)     <bootmenu enable='no'/>
	I0819 12:14:01.122204 2104204 main.go:141] libmachine: (ha-584983-m03)   </os>
	I0819 12:14:01.122218 2104204 main.go:141] libmachine: (ha-584983-m03)   <devices>
	I0819 12:14:01.122228 2104204 main.go:141] libmachine: (ha-584983-m03)     <disk type='file' device='cdrom'>
	I0819 12:14:01.122237 2104204 main.go:141] libmachine: (ha-584983-m03)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/boot2docker.iso'/>
	I0819 12:14:01.122244 2104204 main.go:141] libmachine: (ha-584983-m03)       <target dev='hdc' bus='scsi'/>
	I0819 12:14:01.122250 2104204 main.go:141] libmachine: (ha-584983-m03)       <readonly/>
	I0819 12:14:01.122257 2104204 main.go:141] libmachine: (ha-584983-m03)     </disk>
	I0819 12:14:01.122263 2104204 main.go:141] libmachine: (ha-584983-m03)     <disk type='file' device='disk'>
	I0819 12:14:01.122271 2104204 main.go:141] libmachine: (ha-584983-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:14:01.122279 2104204 main.go:141] libmachine: (ha-584983-m03)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/ha-584983-m03.rawdisk'/>
	I0819 12:14:01.122290 2104204 main.go:141] libmachine: (ha-584983-m03)       <target dev='hda' bus='virtio'/>
	I0819 12:14:01.122298 2104204 main.go:141] libmachine: (ha-584983-m03)     </disk>
	I0819 12:14:01.122306 2104204 main.go:141] libmachine: (ha-584983-m03)     <interface type='network'>
	I0819 12:14:01.122339 2104204 main.go:141] libmachine: (ha-584983-m03)       <source network='mk-ha-584983'/>
	I0819 12:14:01.122363 2104204 main.go:141] libmachine: (ha-584983-m03)       <model type='virtio'/>
	I0819 12:14:01.122376 2104204 main.go:141] libmachine: (ha-584983-m03)     </interface>
	I0819 12:14:01.122391 2104204 main.go:141] libmachine: (ha-584983-m03)     <interface type='network'>
	I0819 12:14:01.122404 2104204 main.go:141] libmachine: (ha-584983-m03)       <source network='default'/>
	I0819 12:14:01.122415 2104204 main.go:141] libmachine: (ha-584983-m03)       <model type='virtio'/>
	I0819 12:14:01.122426 2104204 main.go:141] libmachine: (ha-584983-m03)     </interface>
	I0819 12:14:01.122433 2104204 main.go:141] libmachine: (ha-584983-m03)     <serial type='pty'>
	I0819 12:14:01.122446 2104204 main.go:141] libmachine: (ha-584983-m03)       <target port='0'/>
	I0819 12:14:01.122462 2104204 main.go:141] libmachine: (ha-584983-m03)     </serial>
	I0819 12:14:01.122472 2104204 main.go:141] libmachine: (ha-584983-m03)     <console type='pty'>
	I0819 12:14:01.122482 2104204 main.go:141] libmachine: (ha-584983-m03)       <target type='serial' port='0'/>
	I0819 12:14:01.122493 2104204 main.go:141] libmachine: (ha-584983-m03)     </console>
	I0819 12:14:01.122504 2104204 main.go:141] libmachine: (ha-584983-m03)     <rng model='virtio'>
	I0819 12:14:01.122518 2104204 main.go:141] libmachine: (ha-584983-m03)       <backend model='random'>/dev/random</backend>
	I0819 12:14:01.122537 2104204 main.go:141] libmachine: (ha-584983-m03)     </rng>
	I0819 12:14:01.122549 2104204 main.go:141] libmachine: (ha-584983-m03)     
	I0819 12:14:01.122554 2104204 main.go:141] libmachine: (ha-584983-m03)     
	I0819 12:14:01.122562 2104204 main.go:141] libmachine: (ha-584983-m03)   </devices>
	I0819 12:14:01.122568 2104204 main.go:141] libmachine: (ha-584983-m03) </domain>
	I0819 12:14:01.122579 2104204 main.go:141] libmachine: (ha-584983-m03) 
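Once the <domain> XML above is assembled, the kvm2 driver hands it to libvirt to define and boot the guest. A minimal sketch of that step with the libvirt Go bindings; the import path, file name, and structure here are assumptions for illustration, not minikube's code:

```go
// Sketch: define and start a KVM guest from a prepared <domain> XML file.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("ha-584983-m03.xml") // hypothetical file holding the <domain> XML
	if err != nil {
		panic(err)
	}

	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the guest
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it, equivalent to `virsh start ha-584983-m03`
		panic(err)
	}
	fmt.Println("domain started")
}
```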
	I0819 12:14:01.129716 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:cf:3e:00 in network default
	I0819 12:14:01.130330 2104204 main.go:141] libmachine: (ha-584983-m03) Ensuring networks are active...
	I0819 12:14:01.130344 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:01.131189 2104204 main.go:141] libmachine: (ha-584983-m03) Ensuring network default is active
	I0819 12:14:01.131676 2104204 main.go:141] libmachine: (ha-584983-m03) Ensuring network mk-ha-584983 is active
	I0819 12:14:01.132029 2104204 main.go:141] libmachine: (ha-584983-m03) Getting domain xml...
	I0819 12:14:01.132846 2104204 main.go:141] libmachine: (ha-584983-m03) Creating domain...
	I0819 12:14:02.350986 2104204 main.go:141] libmachine: (ha-584983-m03) Waiting to get IP...
	I0819 12:14:02.351754 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:02.352177 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:02.352206 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:02.352146 2104949 retry.go:31] will retry after 254.721771ms: waiting for machine to come up
	I0819 12:14:02.608737 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:02.609367 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:02.609399 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:02.609289 2104949 retry.go:31] will retry after 312.724144ms: waiting for machine to come up
	I0819 12:14:02.923936 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:02.924539 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:02.924562 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:02.924471 2104949 retry.go:31] will retry after 460.616765ms: waiting for machine to come up
	I0819 12:14:03.387241 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:03.387739 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:03.387771 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:03.387682 2104949 retry.go:31] will retry after 600.970301ms: waiting for machine to come up
	I0819 12:14:03.990382 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:03.990754 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:03.990804 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:03.990704 2104949 retry.go:31] will retry after 566.877282ms: waiting for machine to come up
	I0819 12:14:04.559624 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:04.560109 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:04.560137 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:04.560045 2104949 retry.go:31] will retry after 941.381086ms: waiting for machine to come up
	I0819 12:14:05.503107 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:05.503554 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:05.503583 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:05.503501 2104949 retry.go:31] will retry after 1.11270475s: waiting for machine to come up
	I0819 12:14:06.617508 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:06.617944 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:06.617974 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:06.617885 2104949 retry.go:31] will retry after 1.096156751s: waiting for machine to come up
	I0819 12:14:07.715745 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:07.716174 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:07.716204 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:07.716121 2104949 retry.go:31] will retry after 1.73404849s: waiting for machine to come up
	I0819 12:14:09.452967 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:09.453368 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:09.453396 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:09.453311 2104949 retry.go:31] will retry after 2.304391913s: waiting for machine to come up
	I0819 12:14:11.759060 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:11.759568 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:11.759616 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:11.759546 2104949 retry.go:31] will retry after 2.82956272s: waiting for machine to come up
	I0819 12:14:14.592113 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:14.592490 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:14.592514 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:14.592435 2104949 retry.go:31] will retry after 2.648528724s: waiting for machine to come up
	I0819 12:14:17.243558 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:17.243935 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:17.243971 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:17.243885 2104949 retry.go:31] will retry after 3.594990198s: waiting for machine to come up
	I0819 12:14:20.841951 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:20.842397 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find current IP address of domain ha-584983-m03 in network mk-ha-584983
	I0819 12:14:20.842420 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | I0819 12:14:20.842372 2104949 retry.go:31] will retry after 4.780260026s: waiting for machine to come up
	I0819 12:14:25.623774 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.624126 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has current primary IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.624144 2104204 main.go:141] libmachine: (ha-584983-m03) Found IP for machine: 192.168.39.206
	I0819 12:14:25.624158 2104204 main.go:141] libmachine: (ha-584983-m03) Reserving static IP address...
	I0819 12:14:25.624625 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | unable to find host DHCP lease matching {name: "ha-584983-m03", mac: "52:54:00:5f:1d:ec", ip: "192.168.39.206"} in network mk-ha-584983
	I0819 12:14:25.703388 2104204 main.go:141] libmachine: (ha-584983-m03) Reserved static IP address: 192.168.39.206
	I0819 12:14:25.703426 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Getting to WaitForSSH function...
	I0819 12:14:25.703435 2104204 main.go:141] libmachine: (ha-584983-m03) Waiting for SSH to be available...
	I0819 12:14:25.706345 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.706741 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:25.706780 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.706907 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Using SSH client type: external
	I0819 12:14:25.706939 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa (-rw-------)
	I0819 12:14:25.706974 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:14:25.706987 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | About to run SSH command:
	I0819 12:14:25.707002 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | exit 0
	I0819 12:14:25.836287 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | SSH cmd err, output: <nil>: 
	I0819 12:14:25.836568 2104204 main.go:141] libmachine: (ha-584983-m03) KVM machine creation complete!
	I0819 12:14:25.836871 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetConfigRaw
	I0819 12:14:25.837476 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:25.837689 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:25.837852 2104204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:14:25.837873 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:14:25.839667 2104204 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:14:25.839683 2104204 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:14:25.839690 2104204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:14:25.839696 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:25.842741 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.843150 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:25.843183 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.843325 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:25.843542 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.843712 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.843903 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:25.844074 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:25.844372 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:25.844385 2104204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:14:25.947624 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:14:25.947651 2104204 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:14:25.947661 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:25.950935 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.951375 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:25.951405 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:25.951659 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:25.951865 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.952044 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:25.952146 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:25.952371 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:25.952566 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:25.952579 2104204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:14:26.056715 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:14:26.056789 2104204 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:14:26.056797 2104204 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:14:26.056804 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:26.057069 2104204 buildroot.go:166] provisioning hostname "ha-584983-m03"
	I0819 12:14:26.057091 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:26.057259 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.060240 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.060658 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.060684 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.060856 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.061048 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.061225 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.061372 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.061522 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:26.061708 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:26.061724 2104204 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983-m03 && echo "ha-584983-m03" | sudo tee /etc/hostname
	I0819 12:14:26.186916 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983-m03
	
	I0819 12:14:26.186956 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.189828 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.190182 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.190217 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.190379 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.190629 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.190805 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.190958 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.191180 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:26.191368 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:26.191388 2104204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:14:26.305121 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:14:26.305153 2104204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:14:26.305176 2104204 buildroot.go:174] setting up certificates
	I0819 12:14:26.305188 2104204 provision.go:84] configureAuth start
	I0819 12:14:26.305197 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetMachineName
	I0819 12:14:26.305492 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:26.308001 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.308376 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.308412 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.308546 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.310734 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.311076 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.311095 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.311252 2104204 provision.go:143] copyHostCerts
	I0819 12:14:26.311286 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:14:26.311323 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:14:26.311332 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:14:26.311404 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:14:26.311474 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:14:26.311494 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:14:26.311498 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:14:26.311525 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:14:26.311573 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:14:26.311590 2104204 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:14:26.311596 2104204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:14:26.311618 2104204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:14:26.311669 2104204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983-m03 san=[127.0.0.1 192.168.39.206 ha-584983-m03 localhost minikube]
	I0819 12:14:26.569121 2104204 provision.go:177] copyRemoteCerts
	I0819 12:14:26.569187 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:14:26.569218 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.571873 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.572230 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.572264 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.572375 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.572581 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.572737 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.572838 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:26.654268 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:14:26.654339 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:14:26.678250 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:14:26.678329 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:14:26.701739 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:14:26.701809 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:14:26.724261 2104204 provision.go:87] duration metric: took 419.058353ms to configureAuth
	I0819 12:14:26.724291 2104204 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:14:26.724515 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:26.724602 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.727208 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.727602 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.727631 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.727768 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:26.727995 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.728206 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:26.728405 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:26.728612 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:26.728778 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:26.728793 2104204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:14:26.992584 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:14:26.992619 2104204 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:14:26.992631 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetURL
	I0819 12:14:26.994035 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | Using libvirt version 6000000
	I0819 12:14:26.996078 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.996722 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.996790 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.996848 2104204 main.go:141] libmachine: Docker is up and running!
	I0819 12:14:26.996881 2104204 main.go:141] libmachine: Reticulating splines...
	I0819 12:14:26.996891 2104204 client.go:171] duration metric: took 26.303082645s to LocalClient.Create
	I0819 12:14:26.996917 2104204 start.go:167] duration metric: took 26.303143911s to libmachine.API.Create "ha-584983"
	I0819 12:14:26.996931 2104204 start.go:293] postStartSetup for "ha-584983-m03" (driver="kvm2")
	I0819 12:14:26.996946 2104204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:14:26.996970 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:26.997237 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:14:26.997265 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:26.999332 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.999694 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:26.999723 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:26.999912 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.000115 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.000277 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.000506 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:27.082814 2104204 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:14:27.087172 2104204 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:14:27.087292 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:14:27.087365 2104204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:14:27.087441 2104204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:14:27.087452 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:14:27.087538 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:14:27.096598 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:14:27.123121 2104204 start.go:296] duration metric: took 126.170256ms for postStartSetup
	I0819 12:14:27.123184 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetConfigRaw
	I0819 12:14:27.123820 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:27.126742 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.127155 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.127176 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.127439 2104204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:14:27.127643 2104204 start.go:128] duration metric: took 26.454483741s to createHost
	I0819 12:14:27.127669 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:27.130192 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.130594 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.130613 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.130817 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.131019 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.131171 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.131302 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.131452 2104204 main.go:141] libmachine: Using SSH client type: native
	I0819 12:14:27.131626 2104204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0819 12:14:27.131637 2104204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:14:27.236813 2104204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069667.215694086
	
	I0819 12:14:27.236837 2104204 fix.go:216] guest clock: 1724069667.215694086
	I0819 12:14:27.236845 2104204 fix.go:229] Guest: 2024-08-19 12:14:27.215694086 +0000 UTC Remote: 2024-08-19 12:14:27.127658206 +0000 UTC m=+140.170024068 (delta=88.03588ms)
	I0819 12:14:27.236861 2104204 fix.go:200] guest clock delta is within tolerance: 88.03588ms
	I0819 12:14:27.236869 2104204 start.go:83] releasing machines lock for "ha-584983-m03", held for 26.563821598s
	I0819 12:14:27.236893 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.237203 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:27.239892 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.240335 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.240368 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.242273 2104204 out.go:177] * Found network options:
	I0819 12:14:27.243478 2104204 out.go:177]   - NO_PROXY=192.168.39.12,192.168.39.84
	W0819 12:14:27.244531 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 12:14:27.244557 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:14:27.244570 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.245052 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.245224 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:14:27.245325 2104204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:14:27.245360 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	W0819 12:14:27.245438 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 12:14:27.245462 2104204 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 12:14:27.245531 2104204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:14:27.245554 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:14:27.248065 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.248461 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.248483 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.248540 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.248697 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.248887 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.248949 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:27.248970 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:27.249067 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.249154 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:14:27.249266 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:27.249316 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:14:27.249452 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:14:27.249594 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:14:27.483338 2104204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:14:27.489552 2104204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:14:27.489625 2104204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:14:27.506080 2104204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:14:27.506102 2104204 start.go:495] detecting cgroup driver to use...
	I0819 12:14:27.506167 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:14:27.521558 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:14:27.534443 2104204 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:14:27.534504 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:14:27.547226 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:14:27.560033 2104204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:14:27.672349 2104204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:14:27.816992 2104204 docker.go:233] disabling docker service ...
	I0819 12:14:27.817076 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:14:27.832296 2104204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:14:27.844787 2104204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:14:27.961231 2104204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:14:28.071288 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:14:28.084619 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:14:28.105116 2104204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:14:28.105187 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.116093 2104204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:14:28.116177 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.125944 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.135877 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.146561 2104204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:14:28.157008 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.167047 2104204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.184255 2104204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:14:28.195574 2104204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:14:28.205700 2104204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:14:28.205767 2104204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:14:28.219766 2104204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:14:28.230896 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:14:28.351467 2104204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:14:28.485037 2104204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:14:28.485133 2104204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:14:28.490110 2104204 start.go:563] Will wait 60s for crictl version
	I0819 12:14:28.490186 2104204 ssh_runner.go:195] Run: which crictl
	I0819 12:14:28.493774 2104204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:14:28.537688 2104204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:14:28.537792 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:14:28.569296 2104204 ssh_runner.go:195] Run: crio --version
	I0819 12:14:28.598186 2104204 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:14:28.599268 2104204 out.go:177]   - env NO_PROXY=192.168.39.12
	I0819 12:14:28.600300 2104204 out.go:177]   - env NO_PROXY=192.168.39.12,192.168.39.84
	I0819 12:14:28.601126 2104204 main.go:141] libmachine: (ha-584983-m03) Calling .GetIP
	I0819 12:14:28.603797 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:28.604191 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:14:28.604233 2104204 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:14:28.604433 2104204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:14:28.608873 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:14:28.621020 2104204 mustload.go:65] Loading cluster: ha-584983
	I0819 12:14:28.621244 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:28.621496 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:28.621530 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:28.636419 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0819 12:14:28.636831 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:28.637341 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:28.637364 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:28.637661 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:28.637830 2104204 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:14:28.639368 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:14:28.639669 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:28.639705 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:28.655414 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0819 12:14:28.655810 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:28.656302 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:28.656326 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:28.656660 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:28.656871 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:14:28.657048 2104204 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.206
	I0819 12:14:28.657062 2104204 certs.go:194] generating shared ca certs ...
	I0819 12:14:28.657084 2104204 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:14:28.657227 2104204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:14:28.657281 2104204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:14:28.657302 2104204 certs.go:256] generating profile certs ...
	I0819 12:14:28.657402 2104204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:14:28.657436 2104204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58
	I0819 12:14:28.657457 2104204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.84 192.168.39.206 192.168.39.254]
	I0819 12:14:28.773852 2104204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58 ...
	I0819 12:14:28.773896 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58: {Name:mk4cd36d1696e421580291e5fb6da02c4ccebbd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:14:28.774113 2104204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58 ...
	I0819 12:14:28.774145 2104204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58: {Name:mk079dcb9b42e265e68c0bc488eb971145164250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:14:28.774272 2104204 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e79f3b58 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:14:28.774450 2104204 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e79f3b58 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:14:28.774649 2104204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:14:28.774672 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:14:28.774696 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:14:28.774721 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:14:28.774743 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:14:28.774773 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:14:28.774795 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:14:28.774826 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:14:28.774848 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:14:28.774924 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:14:28.774973 2104204 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:14:28.774989 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:14:28.775037 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:14:28.775089 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:14:28.775132 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:14:28.775196 2104204 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:14:28.775267 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:28.775308 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:14:28.775330 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:14:28.775381 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:14:28.778609 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:28.779092 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:14:28.779120 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:28.779305 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:14:28.779521 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:14:28.779676 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:14:28.779796 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:14:28.856515 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 12:14:28.862405 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 12:14:28.875422 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 12:14:28.881054 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 12:14:28.891797 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 12:14:28.895792 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 12:14:28.905794 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 12:14:28.909941 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 12:14:28.920482 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 12:14:28.924756 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 12:14:28.934368 2104204 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 12:14:28.938349 2104204 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 12:14:28.948602 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:14:28.973356 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:14:28.996824 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:14:29.020087 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:14:29.044099 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 12:14:29.067678 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:14:29.091307 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:14:29.116066 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:14:29.138928 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:14:29.161806 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:14:29.184865 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:14:29.207794 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 12:14:29.224248 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 12:14:29.239897 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 12:14:29.255874 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 12:14:29.271437 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 12:14:29.290155 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 12:14:29.307303 2104204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
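
The certificate sync above follows one pattern throughout: probe the remote path with stat -c over the ssh session opened at 12:14:28.779796, read shared material back into memory, and push everything else with scp. A minimal stand-alone sketch of that probe-then-copy flow in Go, assuming the stock ssh/scp client binaries and placeholder host, key and file names (minikube itself does this in-process via ssh_runner.go):

// copyIfMissing is a rough sketch (not minikube's ssh_runner) of the
// check-then-copy pattern in the log above: stat the remote file and only
// scp it across when the stat fails. Host, key and paths are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func copyIfMissing(host, key, local, remote string) error {
	// Existence check on the remote side; a non-zero exit status means "missing".
	stat := exec.Command("ssh", "-i", key, host, `stat -c "%s %y" `+remote)
	if out, err := stat.CombinedOutput(); err == nil {
		fmt.Printf("remote copy already present: %s", out)
		return nil
	}
	// Otherwise copy the local file into place.
	return exec.Command("scp", "-i", key, local, host+":"+remote).Run()
}

func main() {
	// Placeholder values only; the real run uses the profile's id_rsa and node address.
	err := copyIfMissing("docker@192.168.39.12", "id_rsa", "ca.crt", "/var/lib/minikube/certs/ca.crt")
	fmt.Println("copy result:", err)
}
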
	I0819 12:14:29.324624 2104204 ssh_runner.go:195] Run: openssl version
	I0819 12:14:29.330247 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:14:29.340676 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:14:29.344986 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:14:29.345045 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:14:29.350737 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:14:29.361299 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:14:29.371976 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:29.376477 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:29.376530 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:14:29.382102 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:14:29.393080 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:14:29.402898 2104204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:14:29.406969 2104204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:14:29.407019 2104204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:14:29.412367 2104204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
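
The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA-directory convention (the layout c_rehash produces): each certificate is linked under /etc/ssl/certs as <subject-hash>.0 so the TLS stack can locate it by hash. A small Go equivalent of a single link step, with a placeholder certificate path and root privileges assumed for the symlink:

// hashLink mirrors the linking step logged above: compute the certificate's
// OpenSSL subject-name hash and symlink it under /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}
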
	I0819 12:14:29.422166 2104204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:14:29.425960 2104204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:14:29.426012 2104204 kubeadm.go:934] updating node {m03 192.168.39.206 8443 v1.31.0 crio true true} ...
	I0819 12:14:29.426148 2104204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:14:29.426186 2104204 kube-vip.go:115] generating kube-vip config ...
	I0819 12:14:29.426219 2104204 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:14:29.442533 2104204 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:14:29.442652 2104204 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
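
The manifest above is rendered by kube-vip.go from the cluster's HA VIP (192.168.39.254), the API server port and the kube-vip image, then written to /etc/kubernetes/manifests as a static pod. Purely as an illustration of that templating step, and not minikube's actual template or parameter struct, a generator might look like:

// Illustrative only: rendering a kube-vip static-pod manifest from a
// template with the VIP, port and image substituted. The struct and the
// abbreviated YAML are assumptions, not minikube's real template.
package main

import (
	"os"
	"text/template"
)

type vipParams struct {
	VIP   string
	Port  string
	Image string
}

const podTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	p := vipParams{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
	t := template.Must(template.New("kube-vip").Parse(podTemplate))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
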
	I0819 12:14:29.442716 2104204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:14:29.452086 2104204 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 12:14:29.452139 2104204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 12:14:29.461645 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 12:14:29.461666 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 12:14:29.461690 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:14:29.461703 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:14:29.461701 2104204 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 12:14:29.462120 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:14:29.462253 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 12:14:29.462253 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 12:14:29.467676 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 12:14:29.467706 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 12:14:29.482750 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 12:14:29.482780 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 12:14:29.498325 2104204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:14:29.498443 2104204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 12:14:29.546603 2104204 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 12:14:29.546646 2104204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
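
The ?checksum=file:...sha256 URLs above make the downloader fetch the published SHA-256 digest alongside each binary and verify it before the binary is pushed to the node. Verifying an already-downloaded binary against its .sha256 sidecar can be sketched as follows (file names are placeholders):

// verify checks a downloaded kubelet/kubeadm/kubectl binary against the hex
// digest published in its .sha256 sidecar file. Paths are placeholders.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verify(binPath, sumPath string) error {
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	// The published file may contain just the digest or "digest  filename".
	if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
		return fmt.Errorf("checksum mismatch for %s", binPath)
	}
	return nil
}

func main() {
	if err := verify("kubelet", "kubelet.sha256"); err != nil {
		fmt.Println(err)
	}
}
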
	I0819 12:14:30.298766 2104204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 12:14:30.309956 2104204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 12:14:30.326907 2104204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:14:30.343565 2104204 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 12:14:30.360101 2104204 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:14:30.363835 2104204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:14:30.376800 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:14:30.516076 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:14:30.533984 2104204 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:14:30.534589 2104204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:30.534646 2104204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:30.550695 2104204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0819 12:14:30.551220 2104204 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:30.551752 2104204 main.go:141] libmachine: Using API Version  1
	I0819 12:14:30.551775 2104204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:30.552098 2104204 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:30.552355 2104204 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:14:30.552546 2104204 start.go:317] joinCluster: &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:14:30.552686 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 12:14:30.552704 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:14:30.555706 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:30.556186 2104204 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:14:30.556215 2104204 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:14:30.556404 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:14:30.556597 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:14:30.556787 2104204 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:14:30.556969 2104204 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:14:30.712114 2104204 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:14:30.712215 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1xu9cx.uq1y40mmh58x06ui --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m03 --control-plane --apiserver-advertise-address=192.168.39.206 --apiserver-bind-port=8443"
	I0819 12:14:52.735810 2104204 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1xu9cx.uq1y40mmh58x06ui --discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-584983-m03 --control-plane --apiserver-advertise-address=192.168.39.206 --apiserver-bind-port=8443": (22.023557555s)
	I0819 12:14:52.735855 2104204 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 12:14:53.362522 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-584983-m03 minikube.k8s.io/updated_at=2024_08_19T12_14_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=ha-584983 minikube.k8s.io/primary=false
	I0819 12:14:53.499804 2104204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-584983-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 12:14:53.621707 2104204 start.go:319] duration metric: took 23.069156582s to joinCluster
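
The --discovery-token-ca-cert-hash value in the join command above is, per kubeadm's documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info, so it can be recomputed from a copy of ca.crt; pointed at the real CA file, the sketch below should reproduce the logged value (the path here is a placeholder). The full join line itself came from the kubeadm token create --print-join-command run at 12:14:30.552686.

// Recomputing the kubeadm discovery hash from a CA certificate: SHA-256 over
// the DER-encoded Subject Public Key Info, printed as "sha256:<hex>".
// The ca.crt path is a placeholder for the cluster CA used above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
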
	I0819 12:14:53.621813 2104204 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:14:53.622191 2104204 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:14:53.623181 2104204 out.go:177] * Verifying Kubernetes components...
	I0819 12:14:53.624234 2104204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:14:53.901854 2104204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:14:53.954902 2104204 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:14:53.955214 2104204 kapi.go:59] client config for ha-584983: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 12:14:53.955290 2104204 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0819 12:14:53.955520 2104204 node_ready.go:35] waiting up to 6m0s for node "ha-584983-m03" to be "Ready" ...
	I0819 12:14:53.955617 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:53.955626 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:53.955636 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:53.955642 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:53.959497 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:54.456048 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:54.456071 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:54.456079 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:54.456082 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:54.459876 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:54.955914 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:54.955941 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:54.955949 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:54.955953 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:54.959378 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:55.456092 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:55.456123 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:55.456136 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:55.456143 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:55.459368 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:55.956329 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:55.956355 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:55.956363 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:55.956367 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:55.959453 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:55.959948 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:14:56.456394 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:56.456420 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:56.456431 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:56.456439 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:56.459564 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:56.956730 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:56.956757 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:56.956766 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:56.956778 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:56.960699 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:57.456007 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:57.456035 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:57.456048 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:57.456057 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:57.460225 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:14:57.956112 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:57.956137 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:57.956145 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:57.956150 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:57.959894 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:57.960970 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:14:58.456430 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:58.456460 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:58.456473 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:58.456480 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:58.460147 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:58.955709 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:58.955736 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:58.955745 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:58.955751 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:58.958939 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:59.455787 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:59.455810 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:59.455818 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:59.455823 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:59.458879 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:14:59.955810 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:14:59.955836 2104204 round_trippers.go:469] Request Headers:
	I0819 12:14:59.955844 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:14:59.955848 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:14:59.959134 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:00.456575 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:00.456607 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:00.456619 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:00.456630 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:00.459888 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:00.460622 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:15:00.955830 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:00.955857 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:00.955866 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:00.955869 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:00.959127 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:01.456360 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:01.456384 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:01.456393 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:01.456397 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:01.459675 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:01.956470 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:01.956496 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:01.956505 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:01.956509 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:01.959720 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:02.456600 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:02.456624 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:02.456633 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:02.456636 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:02.459651 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:02.956405 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:02.956430 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:02.956438 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:02.956442 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:02.960129 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:02.960713 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:15:03.456074 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:03.456101 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:03.456113 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:03.456120 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:03.459158 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:03.955928 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:03.955953 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:03.955962 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:03.955967 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:03.959473 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:04.456684 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:04.456709 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:04.456716 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:04.456720 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:04.459782 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:04.955826 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:04.955851 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:04.955859 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:04.955863 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:04.959157 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:05.456619 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:05.456643 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:05.456650 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:05.456654 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:05.459854 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:05.460820 2104204 node_ready.go:53] node "ha-584983-m03" has status "Ready":"False"
	I0819 12:15:05.956069 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:05.956092 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:05.956100 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:05.956106 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:05.959136 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:06.455813 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:06.455839 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:06.455848 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:06.455851 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:06.459182 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:06.956340 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:06.956363 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:06.956371 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:06.956376 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:06.960125 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:07.456415 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:07.456439 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.456452 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.456456 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.459570 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:07.956601 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:07.956630 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.956642 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.956648 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.959902 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:07.960630 2104204 node_ready.go:49] node "ha-584983-m03" has status "Ready":"True"
	I0819 12:15:07.960656 2104204 node_ready.go:38] duration metric: took 14.00511848s for node "ha-584983-m03" to be "Ready" ...
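
The Ready wait above polls /api/v1/nodes/ha-584983-m03 with raw GETs roughly every 500ms until the node reports Ready. The same check written against client-go's typed API would look like the sketch below; the kubeconfig path and timeout are placeholders, and this is not minikube's node_ready.go:

// waitNodeReady polls a node until its NodeReady condition is True, the
// typed-client equivalent of the raw GET loop in the log. Paths and
// durations are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-584983-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}
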
	I0819 12:15:07.960669 2104204 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:07.960760 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:07.960774 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.960785 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.960796 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.966580 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:15:07.973649 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.973735 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jgknw
	I0819 12:15:07.973743 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.973752 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.973759 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.976531 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.977309 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:07.977324 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.977332 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.977338 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.979648 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.980271 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.980289 2104204 pod_ready.go:82] duration metric: took 6.614781ms for pod "coredns-6f6b679f8f-jgknw" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.980297 2104204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.980349 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wll5k
	I0819 12:15:07.980356 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.980362 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.980366 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.983154 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.983855 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:07.983871 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.983878 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.983883 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.986190 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.986693 2104204 pod_ready.go:93] pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.986716 2104204 pod_ready.go:82] duration metric: took 6.41012ms for pod "coredns-6f6b679f8f-wll5k" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.986728 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.986796 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983
	I0819 12:15:07.986807 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.986817 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.986828 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.989085 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.989615 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:07.989631 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.989638 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.989642 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.991818 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.992253 2104204 pod_ready.go:93] pod "etcd-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.992271 2104204 pod_ready.go:82] duration metric: took 5.532521ms for pod "etcd-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.992279 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.992336 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m02
	I0819 12:15:07.992347 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.992356 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.992366 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.994606 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.995154 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:07.995167 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:07.995174 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:07.995179 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:07.997476 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:07.998086 2104204 pod_ready.go:93] pod "etcd-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:07.998106 2104204 pod_ready.go:82] duration metric: took 5.818832ms for pod "etcd-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:07.998117 2104204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.157366 2104204 request.go:632] Waited for 159.156247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m03
	I0819 12:15:08.157430 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-584983-m03
	I0819 12:15:08.157435 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.157443 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.157447 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.160549 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:08.356620 2104204 request.go:632] Waited for 195.29374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:08.356695 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:08.356700 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.356708 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.356714 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.359573 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:08.360361 2104204 pod_ready.go:93] pod "etcd-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:08.360385 2104204 pod_ready.go:82] duration metric: took 362.260516ms for pod "etcd-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
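
The "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter (QPS 5, Burst 10, which is what the QPS:0, Burst:0 in the rest.Config dump at 12:14:53.955214 falls back to), not from server-side priority and fairness. A caller that needs to avoid those pauses can raise the limits before building its clientset; the values below are illustrative:

// Raising client-side rate limits on a rest.Config. With QPS and Burst left
// at zero, client-go uses its defaults (5 and 10), which is what produces
// the throttling waits seen in the log.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // allow up to 50 requests/second on average
	cfg.Burst = 100 // and short bursts of up to 100
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client ready: %T\n", client)
}
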
	I0819 12:15:08.360408 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.557432 2104204 request.go:632] Waited for 196.915612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:15:08.557499 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983
	I0819 12:15:08.557505 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.557513 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.557530 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.560573 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:08.757092 2104204 request.go:632] Waited for 195.861922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:08.757166 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:08.757172 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.757181 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.757189 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.760119 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:08.760622 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:08.760643 2104204 pod_ready.go:82] duration metric: took 400.223597ms for pod "kube-apiserver-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.760653 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:08.956671 2104204 request.go:632] Waited for 195.921498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:15:08.956739 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m02
	I0819 12:15:08.956746 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:08.956758 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:08.956769 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:08.960294 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.157632 2104204 request.go:632] Waited for 196.352944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:09.157693 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:09.157698 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.157707 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.157710 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.160781 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.161476 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:09.161496 2104204 pod_ready.go:82] duration metric: took 400.837426ms for pod "kube-apiserver-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.161509 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.357516 2104204 request.go:632] Waited for 195.903665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m03
	I0819 12:15:09.357586 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-584983-m03
	I0819 12:15:09.357594 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.357602 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.357608 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.361067 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.557354 2104204 request.go:632] Waited for 195.250619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:09.557416 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:09.557421 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.557429 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.557434 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.560616 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.561242 2104204 pod_ready.go:93] pod "kube-apiserver-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:09.561260 2104204 pod_ready.go:82] duration metric: took 399.742194ms for pod "kube-apiserver-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.561275 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.757401 2104204 request.go:632] Waited for 195.982712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:15:09.757524 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983
	I0819 12:15:09.757539 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.757550 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.757555 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.761959 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:15:09.956990 2104204 request.go:632] Waited for 194.349441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:09.957075 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:09.957084 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:09.957097 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:09.957106 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:09.960308 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:09.960973 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:09.960996 2104204 pod_ready.go:82] duration metric: took 399.711575ms for pod "kube-controller-manager-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:09.961006 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.157545 2104204 request.go:632] Waited for 196.457169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:15:10.157644 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m02
	I0819 12:15:10.157653 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.157661 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.157666 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.160852 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:10.357043 2104204 request.go:632] Waited for 195.359572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:10.357124 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:10.357131 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.357140 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.357144 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.360230 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:10.360778 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:10.360799 2104204 pod_ready.go:82] duration metric: took 399.786556ms for pod "kube-controller-manager-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.360808 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.556938 2104204 request.go:632] Waited for 196.047517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m03
	I0819 12:15:10.557022 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-584983-m03
	I0819 12:15:10.557028 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.557037 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.557043 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.560177 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:10.757474 2104204 request.go:632] Waited for 196.333304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:10.757550 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:10.757562 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.757579 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.757584 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.763131 2104204 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 12:15:10.763676 2104204 pod_ready.go:93] pod "kube-controller-manager-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:10.763703 2104204 pod_ready.go:82] duration metric: took 402.887961ms for pod "kube-controller-manager-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.763717 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:10.956681 2104204 request.go:632] Waited for 192.846899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:15:10.956762 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwlpd
	I0819 12:15:10.956768 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:10.956776 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:10.956780 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:10.960560 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.156751 2104204 request.go:632] Waited for 195.296467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:11.156825 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:11.156833 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.156845 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.156855 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.159753 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:11.160444 2104204 pod_ready.go:93] pod "kube-proxy-cwlpd" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:11.160465 2104204 pod_ready.go:82] duration metric: took 396.73822ms for pod "kube-proxy-cwlpd" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.160479 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4pns" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.357618 2104204 request.go:632] Waited for 197.026377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4pns
	I0819 12:15:11.357691 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4pns
	I0819 12:15:11.357698 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.357710 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.357720 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.360860 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.557593 2104204 request.go:632] Waited for 196.016248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:11.557661 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:11.557666 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.557674 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.557679 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.560725 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.561312 2104204 pod_ready.go:93] pod "kube-proxy-f4pns" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:11.561334 2104204 pod_ready.go:82] duration metric: took 400.848183ms for pod "kube-proxy-f4pns" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.561344 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.757341 2104204 request.go:632] Waited for 195.909579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:15:11.757419 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7x44
	I0819 12:15:11.757424 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.757433 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.757437 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.760923 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:11.957181 2104204 request.go:632] Waited for 195.376815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:11.957280 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:11.957292 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:11.957302 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:11.957308 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:11.960296 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:11.960985 2104204 pod_ready.go:93] pod "kube-proxy-w7x44" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:11.961011 2104204 pod_ready.go:82] duration metric: took 399.658569ms for pod "kube-proxy-w7x44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:11.961023 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.156616 2104204 request.go:632] Waited for 195.506599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:15:12.156694 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983
	I0819 12:15:12.156702 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.156714 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.156724 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.159566 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.357553 2104204 request.go:632] Waited for 197.442609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:12.357654 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983
	I0819 12:15:12.357665 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.357677 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.357688 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.360676 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.361282 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:12.361303 2104204 pod_ready.go:82] duration metric: took 400.27142ms for pod "kube-scheduler-ha-584983" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.361316 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.557368 2104204 request.go:632] Waited for 195.948401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:15:12.557431 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m02
	I0819 12:15:12.557437 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.557447 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.557453 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.560413 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.757364 2104204 request.go:632] Waited for 196.253239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:12.757423 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m02
	I0819 12:15:12.757428 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.757439 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.757444 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.760447 2104204 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 12:15:12.761072 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:12.761091 2104204 pod_ready.go:82] duration metric: took 399.766426ms for pod "kube-scheduler-ha-584983-m02" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.761106 2104204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:12.957072 2104204 request.go:632] Waited for 195.883758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m03
	I0819 12:15:12.957165 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-584983-m03
	I0819 12:15:12.957182 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:12.957193 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:12.957203 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:12.960674 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.156767 2104204 request.go:632] Waited for 195.280531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:13.156833 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-584983-m03
	I0819 12:15:13.156838 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.156845 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.156849 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.159878 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.160537 2104204 pod_ready.go:93] pod "kube-scheduler-ha-584983-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:13.160559 2104204 pod_ready.go:82] duration metric: took 399.441894ms for pod "kube-scheduler-ha-584983-m03" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:13.160574 2104204 pod_ready.go:39] duration metric: took 5.199891414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:13.160596 2104204 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:15:13.160665 2104204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:15:13.175919 2104204 api_server.go:72] duration metric: took 19.554062808s to wait for apiserver process to appear ...
	I0819 12:15:13.175950 2104204 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:15:13.175977 2104204 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0819 12:15:13.182965 2104204 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0819 12:15:13.183048 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0819 12:15:13.183060 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.183075 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.183080 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.183893 2104204 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 12:15:13.183971 2104204 api_server.go:141] control plane version: v1.31.0
	I0819 12:15:13.183987 2104204 api_server.go:131] duration metric: took 8.029755ms to wait for apiserver health ...
	I0819 12:15:13.183995 2104204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:15:13.357361 2104204 request.go:632] Waited for 173.252198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.357422 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.357427 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.357435 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.357441 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.365236 2104204 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 12:15:13.371401 2104204 system_pods.go:59] 24 kube-system pods found
	I0819 12:15:13.371429 2104204 system_pods.go:61] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:15:13.371434 2104204 system_pods.go:61] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:15:13.371438 2104204 system_pods.go:61] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:15:13.371442 2104204 system_pods.go:61] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:15:13.371445 2104204 system_pods.go:61] "etcd-ha-584983-m03" [0b45768f-dbac-4e37-998c-8b7f46b894be] Running
	I0819 12:15:13.371454 2104204 system_pods.go:61] "kindnet-2llhd" [7b7f2267-915d-4346-857d-ce2721e90df5] Running
	I0819 12:15:13.371459 2104204 system_pods.go:61] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:15:13.371463 2104204 system_pods.go:61] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:15:13.371466 2104204 system_pods.go:61] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:15:13.371469 2104204 system_pods.go:61] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:15:13.371473 2104204 system_pods.go:61] "kube-apiserver-ha-584983-m03" [bbba7718-8608-4d49-8a91-ca6e38bb0336] Running
	I0819 12:15:13.371476 2104204 system_pods.go:61] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:15:13.371479 2104204 system_pods.go:61] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:15:13.371483 2104204 system_pods.go:61] "kube-controller-manager-ha-584983-m03" [4368f2cb-2795-4d23-9aaa-627f9dc78f5e] Running
	I0819 12:15:13.371489 2104204 system_pods.go:61] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:15:13.371492 2104204 system_pods.go:61] "kube-proxy-f4pns" [fcbbba62-1d0c-4cf7-af69-cd041672818a] Running
	I0819 12:15:13.371495 2104204 system_pods.go:61] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:15:13.371498 2104204 system_pods.go:61] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:15:13.371501 2104204 system_pods.go:61] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:15:13.371504 2104204 system_pods.go:61] "kube-scheduler-ha-584983-m03" [2b919943-a75c-46fb-ba29-6ee8baf0426e] Running
	I0819 12:15:13.371507 2104204 system_pods.go:61] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:15:13.371510 2104204 system_pods.go:61] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:15:13.371513 2104204 system_pods.go:61] "kube-vip-ha-584983-m03" [534bd230-4210-4ff2-a529-cc23ee355ffb] Running
	I0819 12:15:13.371515 2104204 system_pods.go:61] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:15:13.371522 2104204 system_pods.go:74] duration metric: took 187.519025ms to wait for pod list to return data ...
	I0819 12:15:13.371531 2104204 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:15:13.556989 2104204 request.go:632] Waited for 185.366877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:15:13.557063 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0819 12:15:13.557071 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.557079 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.557086 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.560927 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.561076 2104204 default_sa.go:45] found service account: "default"
	I0819 12:15:13.561095 2104204 default_sa.go:55] duration metric: took 189.555851ms for default service account to be created ...
	I0819 12:15:13.561108 2104204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:15:13.757173 2104204 request.go:632] Waited for 195.960052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.757245 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0819 12:15:13.757252 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.757263 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.757272 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.761728 2104204 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 12:15:13.768371 2104204 system_pods.go:86] 24 kube-system pods found
	I0819 12:15:13.768402 2104204 system_pods.go:89] "coredns-6f6b679f8f-jgknw" [7f0e1948-9d0c-49fd-aff6-d23c2d691e7d] Running
	I0819 12:15:13.768411 2104204 system_pods.go:89] "coredns-6f6b679f8f-wll5k" [8383b1b6-e917-411e-834f-13db39309ec2] Running
	I0819 12:15:13.768417 2104204 system_pods.go:89] "etcd-ha-584983" [32edd388-69fe-49c6-8d9b-f1fd7bbcd7b5] Running
	I0819 12:15:13.768423 2104204 system_pods.go:89] "etcd-ha-584983-m02" [eee1abee-b052-42b7-9401-623d23097668] Running
	I0819 12:15:13.768429 2104204 system_pods.go:89] "etcd-ha-584983-m03" [0b45768f-dbac-4e37-998c-8b7f46b894be] Running
	I0819 12:15:13.768435 2104204 system_pods.go:89] "kindnet-2llhd" [7b7f2267-915d-4346-857d-ce2721e90df5] Running
	I0819 12:15:13.768445 2104204 system_pods.go:89] "kindnet-dg8d6" [92ae558c-b6ea-4900-b534-ded9e58c1e96] Running
	I0819 12:15:13.768457 2104204 system_pods.go:89] "kindnet-pb989" [3bc8d323-6389-4720-b63a-63aa2717c2bd] Running
	I0819 12:15:13.768463 2104204 system_pods.go:89] "kube-apiserver-ha-584983" [5a547a01-8a0e-48f9-b1a5-b2337e3c79a2] Running
	I0819 12:15:13.768472 2104204 system_pods.go:89] "kube-apiserver-ha-584983-m02" [375be956-72f0-4e64-aa5e-572472963021] Running
	I0819 12:15:13.768481 2104204 system_pods.go:89] "kube-apiserver-ha-584983-m03" [bbba7718-8608-4d49-8a91-ca6e38bb0336] Running
	I0819 12:15:13.768489 2104204 system_pods.go:89] "kube-controller-manager-ha-584983" [b0c9e96e-637a-4011-b983-8ab78dbed853] Running
	I0819 12:15:13.768493 2104204 system_pods.go:89] "kube-controller-manager-ha-584983-m02" [e02c4c61-cbb5-456c-92f3-50e1988a0569] Running
	I0819 12:15:13.768500 2104204 system_pods.go:89] "kube-controller-manager-ha-584983-m03" [4368f2cb-2795-4d23-9aaa-627f9dc78f5e] Running
	I0819 12:15:13.768508 2104204 system_pods.go:89] "kube-proxy-cwlpd" [6c237025-3826-4cb3-bd67-6600c73e2153] Running
	I0819 12:15:13.768514 2104204 system_pods.go:89] "kube-proxy-f4pns" [fcbbba62-1d0c-4cf7-af69-cd041672818a] Running
	I0819 12:15:13.768519 2104204 system_pods.go:89] "kube-proxy-w7x44" [cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5] Running
	I0819 12:15:13.768525 2104204 system_pods.go:89] "kube-scheduler-ha-584983" [c4350173-0322-4268-b40c-ab76f38b2dc9] Running
	I0819 12:15:13.768528 2104204 system_pods.go:89] "kube-scheduler-ha-584983-m02" [c3d5440f-d2bb-4f22-8b62-86765206ec07] Running
	I0819 12:15:13.768534 2104204 system_pods.go:89] "kube-scheduler-ha-584983-m03" [2b919943-a75c-46fb-ba29-6ee8baf0426e] Running
	I0819 12:15:13.768538 2104204 system_pods.go:89] "kube-vip-ha-584983" [ed94966c-ea7e-4d24-88c3-2a5106f418d4] Running
	I0819 12:15:13.768545 2104204 system_pods.go:89] "kube-vip-ha-584983-m02" [2a456354-eb5d-45d1-89b3-911050797334] Running
	I0819 12:15:13.768549 2104204 system_pods.go:89] "kube-vip-ha-584983-m03" [534bd230-4210-4ff2-a529-cc23ee355ffb] Running
	I0819 12:15:13.768555 2104204 system_pods.go:89] "storage-provisioner" [8b48e339-40f9-4517-b17d-1290ded5456e] Running
	I0819 12:15:13.768561 2104204 system_pods.go:126] duration metric: took 207.44777ms to wait for k8s-apps to be running ...
	I0819 12:15:13.768571 2104204 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:15:13.768623 2104204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:15:13.784297 2104204 system_svc.go:56] duration metric: took 15.714034ms WaitForService to wait for kubelet
	I0819 12:15:13.784328 2104204 kubeadm.go:582] duration metric: took 20.162478235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:15:13.784356 2104204 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:15:13.956764 2104204 request.go:632] Waited for 172.282363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0819 12:15:13.956830 2104204 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0819 12:15:13.956837 2104204 round_trippers.go:469] Request Headers:
	I0819 12:15:13.956849 2104204 round_trippers.go:473]     Accept: application/json, */*
	I0819 12:15:13.956859 2104204 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 12:15:13.960451 2104204 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 12:15:13.961376 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:13.961397 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:13.961410 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:13.961414 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:13.961417 2104204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:13.961421 2104204 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:13.961424 2104204 node_conditions.go:105] duration metric: took 177.063761ms to run NodePressure ...
	I0819 12:15:13.961436 2104204 start.go:241] waiting for startup goroutines ...
	I0819 12:15:13.961461 2104204 start.go:255] writing updated cluster config ...
	I0819 12:15:13.961749 2104204 ssh_runner.go:195] Run: rm -f paused
	I0819 12:15:14.014910 2104204 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 12:15:14.016961 2104204 out.go:177] * Done! kubectl is now configured to use "ha-584983" cluster and "default" namespace by default
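
	For anyone triaging a similar run by hand, the checks recorded above can be replayed against the cluster. This is only a sketch based on the commands visible in this log; the endpoint 192.168.39.12:8443, the node names ha-584983*, and the kubectl context name ha-584983 are specific to this run and assumed from the "Done!" line above.

	    # apiserver health probe minikube polls (api_server.go:253); /healthz is readable anonymously on kubeadm defaults
	    curl -k https://192.168.39.12:8443/healthz
	    # confirm the apiserver process is present on the control-plane node (ssh_runner.go:195)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # confirm the kubelet service is active, exactly as the log runs it (system_svc.go:44)
	    sudo systemctl is-active --quiet service kubelet
	    # list the kube-system pods the readiness loop above waits on
	    kubectl --context ha-584983 get pods -n kube-system
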
	
	
	==> CRI-O <==
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.507568636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069984507544215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d6a53dd-02cd-4138-80c8-865ea84c7122 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.508062181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9cb1477-39e2-4727-980f-7622bba70c25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.508115736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9cb1477-39e2-4727-980f-7622bba70c25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.508338789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9cb1477-39e2-4727-980f-7622bba70c25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.548405035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc216885-5089-40e5-878c-1a6765484a2f name=/runtime.v1.RuntimeService/Version
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.548534266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc216885-5089-40e5-878c-1a6765484a2f name=/runtime.v1.RuntimeService/Version
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.549778954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49256853-55fc-4dd8-b953-6c97934e7779 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.550206057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069984550186427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49256853-55fc-4dd8-b953-6c97934e7779 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.550706246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cda8b0b-9199-4dd2-8b6d-648c4502fe38 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.550779735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cda8b0b-9199-4dd2-8b6d-648c4502fe38 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.551015247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cda8b0b-9199-4dd2-8b6d-648c4502fe38 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.593304766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc392aa9-a857-4093-8cb3-20510c40be93 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.593429361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc392aa9-a857-4093-8cb3-20510c40be93 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.594996883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fda9ece2-0749-43a7-8523-fdf713276902 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.595418822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069984595398601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fda9ece2-0749-43a7-8523-fdf713276902 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.596068952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17374a17-a589-4951-8b03-65f4b5d06b56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.596140976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17374a17-a589-4951-8b03-65f4b5d06b56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.596383196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17374a17-a589-4951-8b03-65f4b5d06b56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.638870777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d929ad40-495f-4017-90c4-2f34ad505de7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.638940934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d929ad40-495f-4017-90c4-2f34ad505de7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.639807752Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e371718b-a601-4959-aa1a-e4b265be8ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.640255503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069984640230825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e371718b-a601-4959-aa1a-e4b265be8ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.640722152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c363d62-51da-4030-8e04-b0cacce9e857 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.640768875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c363d62-51da-4030-8e04-b0cacce9e857 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:19:44 ha-584983 crio[678]: time="2024-08-19 12:19:44.640998206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069716321576219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586380218262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069586331821904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534,PodSandboxId:ce13f99cb40a3735c5d8b29338c4df855cda711e7b6ddbeb55524a7fbfc40fe2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724069586228951996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724069574416775079,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406957
2336617782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744,PodSandboxId:a59ca849892dfd95d5e1a304407e55aaccdf6cbc1f3a27135389823bbc93f643,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406956230
6258863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9795400a82b7a6e66117dbdcd869ddc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069560632009013,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d,PodSandboxId:9c0652a3e0b8ecf15b79d44ec5322029418a078cc4844f48249fe928cf7b0aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069560556942735,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857,PodSandboxId:2ce334e1bfd5f239f65e692452112744a87327fbd727509d862cc03f54516de4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069560598208864,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069560496327269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c363d62-51da-4030-8e04-b0cacce9e857 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c12861dc1b990       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   f1429b3feda56       busybox-7dff88458-s954n
	8f808fb6e4a29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   4d323e7bde365       coredns-6f6b679f8f-jgknw
	214bc022395ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   9fff4f7ac43f0       coredns-6f6b679f8f-wll5k
	c6a144ce90fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ce13f99cb40a3       storage-provisioner
	668ac1e0d9a1b       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   a9564aacd2862       kindnet-dg8d6
	6bcc112384a64       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   f34234b07799c       kube-proxy-w7x44
	993c24f05aeae       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   a59ca849892df       kube-vip-ha-584983
	799883f5e6773       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   22491c1bbcc37       kube-scheduler-ha-584983
	65afb8623fcea       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   2ce334e1bfd5f       kube-apiserver-ha-584983
	7957c6abca008       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   9c0652a3e0b8e       kube-controller-manager-ha-584983
	e2afc8c50fc96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   f3aa1be314868       etcd-ha-584983
	
	
	==> coredns [214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577] <==
	[INFO] 10.244.1.2:57350 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000149447s
	[INFO] 10.244.1.2:55845 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000072579s
	[INFO] 10.244.2.2:37261 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.045080098s
	[INFO] 10.244.2.2:51973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220525s
	[INFO] 10.244.2.2:34558 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142147s
	[INFO] 10.244.2.2:33150 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000263427s
	[INFO] 10.244.0.4:58833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122286s
	[INFO] 10.244.0.4:37194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129784s
	[INFO] 10.244.1.2:34921 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829614s
	[INFO] 10.244.1.2:53910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115811s
	[INFO] 10.244.1.2:37254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008251s
	[INFO] 10.244.2.2:43347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012466s
	[INFO] 10.244.2.2:51494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210784s
	[INFO] 10.244.2.2:36691 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100028s
	[INFO] 10.244.0.4:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124263s
	[INFO] 10.244.0.4:51765 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099031s
	[INFO] 10.244.0.4:47207 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071543s
	[INFO] 10.244.0.4:37638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047976s
	[INFO] 10.244.1.2:36335 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080226s
	[INFO] 10.244.2.2:35526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107074s
	[INFO] 10.244.2.2:52104 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134936s
	[INFO] 10.244.0.4:37351 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075755s
	[INFO] 10.244.0.4:47268 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121675s
	[INFO] 10.244.1.2:55348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134871s
	[INFO] 10.244.1.2:40841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064834s
	
	
	==> coredns [8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43] <==
	[INFO] 10.244.2.2:59285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158842s
	[INFO] 10.244.2.2:55773 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212182s
	[INFO] 10.244.2.2:34121 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214046s
	[INFO] 10.244.2.2:56792 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.042778287s
	[INFO] 10.244.0.4:59540 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001673972s
	[INFO] 10.244.0.4:42038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221674s
	[INFO] 10.244.0.4:46483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00106179s
	[INFO] 10.244.0.4:48125 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091001s
	[INFO] 10.244.0.4:37333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069922s
	[INFO] 10.244.0.4:50093 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066014s
	[INFO] 10.244.1.2:35478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106276s
	[INFO] 10.244.1.2:32788 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077319s
	[INFO] 10.244.1.2:48500 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062887s
	[INFO] 10.244.1.2:52624 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001380815s
	[INFO] 10.244.1.2:35526 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215556s
	[INFO] 10.244.2.2:42664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169406s
	[INFO] 10.244.1.2:47636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107912s
	[INFO] 10.244.1.2:57549 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279969s
	[INFO] 10.244.1.2:37605 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079457s
	[INFO] 10.244.2.2:59708 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107128s
	[INFO] 10.244.2.2:54205 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121358s
	[INFO] 10.244.0.4:57068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104078s
	[INFO] 10.244.0.4:48200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000268832s
	[INFO] 10.244.1.2:48471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138271s
	[INFO] 10.244.1.2:34249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107161s
	
	
	==> describe nodes <==
	Name:               ha-584983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:19:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:15:21 +0000   Mon, 19 Aug 2024 12:13:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-584983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cd8edc3a2c24fa5a0144251d1079f81
	  System UUID:                9cd8edc3-a2c2-4fa5-a014-4251d1079f81
	  Boot ID:                    c299daa9-50a5-44ae-ad64-3cd1ed4c7fa4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s954n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-6f6b679f8f-jgknw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m53s
	  kube-system                 coredns-6f6b679f8f-wll5k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m53s
	  kube-system                 etcd-ha-584983                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m56s
	  kube-system                 kindnet-dg8d6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m53s
	  kube-system                 kube-apiserver-ha-584983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-controller-manager-ha-584983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-proxy-w7x44                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 kube-scheduler-ha-584983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-vip-ha-584983                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m52s  kube-proxy       
	  Normal  Starting                 6m56s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m56s  kubelet          Node ha-584983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s  kubelet          Node ha-584983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s  kubelet          Node ha-584983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m54s  node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal  NodeReady                6m39s  kubelet          Node ha-584983 status is now: NodeReady
	  Normal  RegisteredNode           5m57s  node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	
	
	Name:               ha-584983-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_13_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:13:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:16:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 12:15:40 +0000   Mon, 19 Aug 2024 12:17:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-584983-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c94b4594d79f4ec0a8dd96aab1847eb5
	  System UUID:                c94b4594-d79f-4ec0-a8dd-96aab1847eb5
	  Boot ID:                    28d9d5da-0b2e-4025-a636-4d1384214658
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xjjwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-584983-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-pb989                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-584983-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-ha-584983-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-cwlpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-584983-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-584983-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m1s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     6m6s                 cidrAllocator    Node ha-584983-m02 status is now: CIDRAssignmentFailed
	  Normal  Starting                 6m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet          Node ha-584983-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m4s                 node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           5m57s                node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           4m46s                node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  NodeNotReady             2m41s                node-controller  Node ha-584983-m02 status is now: NodeNotReady
	
	
	Name:               ha-584983-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_14_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:14:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:14:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:14:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:14:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:15:19 +0000   Mon, 19 Aug 2024 12:15:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-584983-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3194fc0bd2184b688dfa8865a1e17091
	  System UUID:                3194fc0b-d218-4b68-8dfa-8865a1e17091
	  Boot ID:                    320ac109-a844-470c-bcd1-726d71c129c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hxh55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-584983-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m55s
	  kube-system                 kindnet-2llhd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m56s
	  kube-system                 kube-apiserver-ha-584983-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-ha-584983-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-f4pns                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-584983-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-584983-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node ha-584983-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m56s                  cidrAllocator    Node ha-584983-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	
	
	Name:               ha-584983-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_15_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:16:20 +0000   Mon, 19 Aug 2024 12:16:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-584983-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfad94302a8543c8a125516d743187f8
	  System UUID:                cfad9430-2a85-43c8-a125-516d743187f8
	  Boot ID:                    1a8eea34-dade-46ac-a76a-f1a3f4167ca5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cjfsg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m55s
	  kube-system                 kube-proxy-rd7nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  CIDRAssignmentFailed     3m55s                  cidrAllocator    Node ha-584983-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m55s (x2 over 3m56s)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x2 over 3m56s)  kubelet          Node ha-584983-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x2 over 3m56s)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node ha-584983-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 12:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050355] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.755816] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.506391] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.876325] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.203525] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.059214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056195] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.190463] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.123484] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.274294] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.959189] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.909497] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056139] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.254427] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090121] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.241682] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 12:13] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7] <==
	{"level":"warn","ts":"2024-08-19T12:19:44.926185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.929856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.942140Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.943305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.949636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.956190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.963510Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.966753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.973687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.980015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.985786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.989626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:44.992770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.000356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.006356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.012309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.016260Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.019034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.023107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.029601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.035512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.042756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.051181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.070998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:19:45.073211Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:19:45 up 7 min,  0 users,  load average: 0.13, 0.21, 0.12
	Linux ha-584983 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb] <==
	I0819 12:19:05.493153       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:19:15.491750       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:19:15.491929       1 main.go:299] handling current node
	I0819 12:19:15.492682       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:19:15.492804       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:19:15.493068       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:19:15.493121       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:19:15.493251       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:19:15.493301       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:19:25.494622       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:19:25.494677       1 main.go:299] handling current node
	I0819 12:19:25.494690       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:19:25.494696       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:19:25.494845       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:19:25.494873       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:19:25.494926       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:19:25.494947       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:19:35.487684       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:19:35.487785       1 main.go:299] handling current node
	I0819 12:19:35.487828       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:19:35.487850       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:19:35.488028       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:19:35.488053       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:19:35.488180       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:19:35.488212       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857] <==
	I0819 12:12:45.469745       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 12:12:45.477529       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12]
	I0819 12:12:45.478248       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:12:45.482578       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:12:45.871703       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:12:48.509730       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:12:48.528203       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 12:12:48.546402       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:12:51.272848       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 12:12:51.523834       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 12:15:17.846800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48890: use of closed network connection
	E0819 12:15:18.034092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48898: use of closed network connection
	E0819 12:15:18.294235       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48928: use of closed network connection
	E0819 12:15:18.489646       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48944: use of closed network connection
	E0819 12:15:18.683068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48962: use of closed network connection
	E0819 12:15:18.902889       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48966: use of closed network connection
	E0819 12:15:19.084147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48986: use of closed network connection
	E0819 12:15:19.273632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49006: use of closed network connection
	E0819 12:15:19.573387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49032: use of closed network connection
	E0819 12:15:19.749854       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49048: use of closed network connection
	E0819 12:15:19.994936       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49062: use of closed network connection
	E0819 12:15:20.183769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49084: use of closed network connection
	E0819 12:15:20.359973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49114: use of closed network connection
	E0819 12:15:20.535070       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49124: use of closed network connection
	W0819 12:16:45.493254       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.206]
	
	
	==> kube-controller-manager [7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d] <==
	E0819 12:15:50.370786       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-584983-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-584983-m04"
	E0819 12:15:50.370841       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-584983-m04': failed to patch node CIDR: Node \"ha-584983-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 12:15:50.370866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:50.375889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:50.627552       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:50.639724       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-584983-m04"
	I0819 12:15:50.690363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:51.194285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.047490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.102953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.550923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:15:53.583740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:00.546236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:08.465628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:08.465869       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-584983-m04"
	I0819 12:16:08.480501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:08.566943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:16:20.931898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:17:03.591636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:17:03.592083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-584983-m04"
	I0819 12:17:03.610802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:17:03.760316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.134832ms"
	I0819 12:17:03.760412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.069µs"
	I0819 12:17:05.691211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:17:08.776298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	
	
	==> kube-proxy [6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:12:52.545163       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:12:52.554518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0819 12:12:52.554600       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:12:52.597599       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:12:52.597654       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:12:52.597687       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:12:52.600184       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:12:52.600537       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:12:52.600566       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:12:52.601770       1 config.go:197] "Starting service config controller"
	I0819 12:12:52.601819       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:12:52.601848       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:12:52.601852       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:12:52.602324       1 config.go:326] "Starting node config controller"
	I0819 12:12:52.602351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:12:52.702526       1 shared_informer.go:320] Caches are synced for node config
	I0819 12:12:52.702572       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:12:52.702598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94] <==
	W0819 12:12:43.898326       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:12:43.898365       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 12:12:44.779190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:44.779336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.796142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 12:12:44.796220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.825714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:12:44.825862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.899414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:12:44.899621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.943911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:44.944030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:44.996827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:12:44.996989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:45.070635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 12:12:45.070724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:45.161155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:45.161289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 12:12:45.489324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:15:14.882392       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hxh55" node="ha-584983-m03"
	E0819 12:15:14.883871       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" pod="default/busybox-7dff88458-hxh55"
	E0819 12:15:50.520891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	E0819 12:15:50.521012       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d49d2a08-95ae-4ebd-a52e-3d0575bfd9f5(kube-system/kube-proxy-fbckf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fbckf"
	E0819 12:15:50.521045       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" pod="kube-system/kube-proxy-fbckf"
	I0819 12:15:50.521064       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	
	
	==> kubelet <==
	Aug 19 12:18:08 ha-584983 kubelet[1315]: E0819 12:18:08.558336    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069888558007351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:08 ha-584983 kubelet[1315]: E0819 12:18:08.558364    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069888558007351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:18 ha-584983 kubelet[1315]: E0819 12:18:18.561293    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069898560909410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:18 ha-584983 kubelet[1315]: E0819 12:18:18.561358    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069898560909410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:28 ha-584983 kubelet[1315]: E0819 12:18:28.566132    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069908563879326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:28 ha-584983 kubelet[1315]: E0819 12:18:28.566185    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069908563879326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:38 ha-584983 kubelet[1315]: E0819 12:18:38.567516    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069918567101073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:38 ha-584983 kubelet[1315]: E0819 12:18:38.567540    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069918567101073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:48 ha-584983 kubelet[1315]: E0819 12:18:48.454868    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:18:48 ha-584983 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:18:48 ha-584983 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:18:48 ha-584983 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:18:48 ha-584983 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:18:48 ha-584983 kubelet[1315]: E0819 12:18:48.569693    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069928568942365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:48 ha-584983 kubelet[1315]: E0819 12:18:48.569730    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069928568942365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:58 ha-584983 kubelet[1315]: E0819 12:18:58.571404    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069938571151342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:18:58 ha-584983 kubelet[1315]: E0819 12:18:58.571499    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069938571151342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:08 ha-584983 kubelet[1315]: E0819 12:19:08.574111    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069948573640417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:08 ha-584983 kubelet[1315]: E0819 12:19:08.574187    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069948573640417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:18 ha-584983 kubelet[1315]: E0819 12:19:18.577123    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069958575962200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:18 ha-584983 kubelet[1315]: E0819 12:19:18.577793    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069958575962200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:28 ha-584983 kubelet[1315]: E0819 12:19:28.579602    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069968579114725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:28 ha-584983 kubelet[1315]: E0819 12:19:28.580498    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069968579114725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:38 ha-584983 kubelet[1315]: E0819 12:19:38.584287    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069978583211053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:19:38 ha-584983 kubelet[1315]: E0819 12:19:38.585650    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069978583211053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
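The kube-controller-manager log above records a CIDRAssignmentFailed event for ha-584983-m04 because the range allocator briefly tried to patch two pod CIDRs (10.244.4.0/24 and 10.244.3.0/24) onto the node, while the node describe output shows it eventually settled on 10.244.3.0/24. A minimal sketch for confirming what ended up on the node object, assuming the same ha-584983 kubeconfig context used elsewhere in this report:

    # inspect the pod CIDR(s) recorded on the node object
    kubectl --context ha-584983 get node ha-584983-m04 -o jsonpath='{.spec.podCIDR}{"\n"}{.spec.podCIDRs}{"\n"}'
    # cross-check recent CIDR allocation events for that node
    kubectl --context ha-584983 get events -A --field-selector involvedObject.name=ha-584983-m04 | grep -i cidr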
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-584983 -n ha-584983
helpers_test.go:261: (dbg) Run:  kubectl --context ha-584983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (51.70s)
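The kubelet log above also shows the periodic iptables canary failing because the legacy ip6tables backend cannot initialize the nat table inside the guest. This warning is likely incidental to the test failure, but a minimal sketch for checking whether the ip6table_nat module can be loaded in the VM (assuming the usual `minikube ssh` access with the -p ha-584983 profile, and that the Buildroot kernel actually ships the module) would be:

    # look for the IPv6 NAT modules, then try to load and exercise the table
    out/minikube-linux-amd64 -p ha-584983 ssh "lsmod | grep -E 'ip6table_nat|ip6_tables' || true"
    out/minikube-linux-amd64 -p ha-584983 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L >/dev/null && echo ip6 nat ok"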

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-584983 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-584983 -v=7 --alsologtostderr
E0819 12:21:26.668626 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-584983 -v=7 --alsologtostderr: exit status 82 (2m1.917814814s)
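The non-zero exit (status 82) indicates `minikube stop` did not complete cleanly: as the stdout and stderr below show, ha-584983-m04 powers off within seconds, while ha-584983-m03 is still being polled ("Waiting for machine to stop N/120") when the command gives up. A minimal sketch for inspecting the stuck libvirt domain from the host, assuming the kvm2 driver names the domain after the machine (as the DBG lines below indicate) and treating the hard power-off as a last resort:

    sudo virsh domstate ha-584983-m03     # expect "running" if the guest ignored the ACPI shutdown
    sudo virsh shutdown ha-584983-m03     # retry a graceful ACPI shutdown
    sudo virsh destroy  ha-584983-m03     # hard power-off only if the graceful retry also hangs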

                                                
                                                
-- stdout --
	* Stopping node "ha-584983-m04"  ...
	* Stopping node "ha-584983-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:19:46.490661 2110300 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:46.490787 2110300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:46.490797 2110300 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:46.490803 2110300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:46.490972 2110300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:19:46.491245 2110300 out.go:352] Setting JSON to false
	I0819 12:19:46.491366 2110300 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:46.491814 2110300 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:46.491943 2110300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:19:46.492233 2110300 mustload.go:65] Loading cluster: ha-584983
	I0819 12:19:46.492406 2110300 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:19:46.492467 2110300 stop.go:39] StopHost: ha-584983-m04
	I0819 12:19:46.492886 2110300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:46.492938 2110300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:46.508555 2110300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0819 12:19:46.509142 2110300 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:46.509765 2110300 main.go:141] libmachine: Using API Version  1
	I0819 12:19:46.509785 2110300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:46.510159 2110300 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:46.512422 2110300 out.go:177] * Stopping node "ha-584983-m04"  ...
	I0819 12:19:46.513636 2110300 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 12:19:46.513677 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:19:46.513920 2110300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 12:19:46.513967 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:19:46.517206 2110300 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:46.517778 2110300 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:36 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:19:46.517814 2110300 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:19:46.518016 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:19:46.518228 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:19:46.518360 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:19:46.518492 2110300 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:19:46.603250 2110300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 12:19:46.659041 2110300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 12:19:46.712590 2110300 main.go:141] libmachine: Stopping "ha-584983-m04"...
	I0819 12:19:46.712619 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:46.714151 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .Stop
	I0819 12:19:46.717448 2110300 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 0/120
	I0819 12:19:47.922377 2110300 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:19:47.923599 2110300 main.go:141] libmachine: Machine "ha-584983-m04" was stopped.
	I0819 12:19:47.923621 2110300 stop.go:75] duration metric: took 1.409989501s to stop
	I0819 12:19:47.923662 2110300 stop.go:39] StopHost: ha-584983-m03
	I0819 12:19:47.923959 2110300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:19:47.924002 2110300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:19:47.939464 2110300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0819 12:19:47.939946 2110300 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:19:47.940471 2110300 main.go:141] libmachine: Using API Version  1
	I0819 12:19:47.940491 2110300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:19:47.940851 2110300 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:19:47.942936 2110300 out.go:177] * Stopping node "ha-584983-m03"  ...
	I0819 12:19:47.944241 2110300 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 12:19:47.944268 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .DriverName
	I0819 12:19:47.944517 2110300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 12:19:47.944544 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHHostname
	I0819 12:19:47.947691 2110300 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:47.948334 2110300 main.go:141] libmachine: (ha-584983-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1d:ec", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:14:15 +0000 UTC Type:0 Mac:52:54:00:5f:1d:ec Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-584983-m03 Clientid:01:52:54:00:5f:1d:ec}
	I0819 12:19:47.948377 2110300 main.go:141] libmachine: (ha-584983-m03) DBG | domain ha-584983-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:5f:1d:ec in network mk-ha-584983
	I0819 12:19:47.948523 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHPort
	I0819 12:19:47.948709 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHKeyPath
	I0819 12:19:47.948874 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .GetSSHUsername
	I0819 12:19:47.949048 2110300 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m03/id_rsa Username:docker}
	I0819 12:19:48.035375 2110300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 12:19:48.090565 2110300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 12:19:48.148043 2110300 main.go:141] libmachine: Stopping "ha-584983-m03"...
	I0819 12:19:48.148078 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .GetState
	I0819 12:19:48.149846 2110300 main.go:141] libmachine: (ha-584983-m03) Calling .Stop
	I0819 12:19:48.153479 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 0/120
	I0819 12:19:49.155004 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 1/120
	I0819 12:19:50.156411 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 2/120
	I0819 12:19:51.157597 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 3/120
	I0819 12:19:52.159092 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 4/120
	I0819 12:19:53.161213 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 5/120
	I0819 12:19:54.163234 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 6/120
	I0819 12:19:55.165153 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 7/120
	I0819 12:19:56.167555 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 8/120
	I0819 12:19:57.169337 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 9/120
	I0819 12:19:58.171491 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 10/120
	I0819 12:19:59.173115 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 11/120
	I0819 12:20:00.175094 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 12/120
	I0819 12:20:01.176600 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 13/120
	I0819 12:20:02.177937 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 14/120
	I0819 12:20:03.179529 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 15/120
	I0819 12:20:04.181014 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 16/120
	I0819 12:20:05.182609 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 17/120
	I0819 12:20:06.184145 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 18/120
	I0819 12:20:07.185619 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 19/120
	I0819 12:20:08.187477 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 20/120
	I0819 12:20:09.189025 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 21/120
	I0819 12:20:10.190731 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 22/120
	I0819 12:20:11.192082 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 23/120
	I0819 12:20:12.193414 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 24/120
	I0819 12:20:13.195272 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 25/120
	I0819 12:20:14.196934 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 26/120
	I0819 12:20:15.198806 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 27/120
	I0819 12:20:16.200267 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 28/120
	I0819 12:20:17.201713 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 29/120
	I0819 12:20:18.203134 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 30/120
	I0819 12:20:19.205038 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 31/120
	I0819 12:20:20.206477 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 32/120
	I0819 12:20:21.207990 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 33/120
	I0819 12:20:22.209356 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 34/120
	I0819 12:20:23.211034 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 35/120
	I0819 12:20:24.212520 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 36/120
	I0819 12:20:25.213942 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 37/120
	I0819 12:20:26.215322 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 38/120
	I0819 12:20:27.216682 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 39/120
	I0819 12:20:28.218480 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 40/120
	I0819 12:20:29.219943 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 41/120
	I0819 12:20:30.221412 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 42/120
	I0819 12:20:31.222778 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 43/120
	I0819 12:20:32.224219 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 44/120
	I0819 12:20:33.226097 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 45/120
	I0819 12:20:34.227516 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 46/120
	I0819 12:20:35.228951 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 47/120
	I0819 12:20:36.230840 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 48/120
	I0819 12:20:37.232353 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 49/120
	I0819 12:20:38.234252 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 50/120
	I0819 12:20:39.235641 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 51/120
	I0819 12:20:40.237115 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 52/120
	I0819 12:20:41.238392 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 53/120
	I0819 12:20:42.239961 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 54/120
	I0819 12:20:43.241764 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 55/120
	I0819 12:20:44.243338 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 56/120
	I0819 12:20:45.244675 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 57/120
	I0819 12:20:46.246235 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 58/120
	I0819 12:20:47.247500 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 59/120
	I0819 12:20:48.248996 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 60/120
	I0819 12:20:49.250340 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 61/120
	I0819 12:20:50.251801 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 62/120
	I0819 12:20:51.253205 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 63/120
	I0819 12:20:52.254654 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 64/120
	I0819 12:20:53.256814 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 65/120
	I0819 12:20:54.258198 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 66/120
	I0819 12:20:55.259668 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 67/120
	I0819 12:20:56.261062 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 68/120
	I0819 12:20:57.262580 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 69/120
	I0819 12:20:58.264418 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 70/120
	I0819 12:20:59.265767 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 71/120
	I0819 12:21:00.267129 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 72/120
	I0819 12:21:01.268746 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 73/120
	I0819 12:21:02.270186 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 74/120
	I0819 12:21:03.271544 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 75/120
	I0819 12:21:04.272834 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 76/120
	I0819 12:21:05.275097 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 77/120
	I0819 12:21:06.276407 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 78/120
	I0819 12:21:07.277699 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 79/120
	I0819 12:21:08.279582 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 80/120
	I0819 12:21:09.281740 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 81/120
	I0819 12:21:10.283448 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 82/120
	I0819 12:21:11.285055 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 83/120
	I0819 12:21:12.286499 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 84/120
	I0819 12:21:13.288429 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 85/120
	I0819 12:21:14.290714 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 86/120
	I0819 12:21:15.292137 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 87/120
	I0819 12:21:16.293559 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 88/120
	I0819 12:21:17.294945 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 89/120
	I0819 12:21:18.296975 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 90/120
	I0819 12:21:19.298458 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 91/120
	I0819 12:21:20.299860 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 92/120
	I0819 12:21:21.301330 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 93/120
	I0819 12:21:22.302762 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 94/120
	I0819 12:21:23.304606 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 95/120
	I0819 12:21:24.306074 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 96/120
	I0819 12:21:25.307389 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 97/120
	I0819 12:21:26.308654 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 98/120
	I0819 12:21:27.310547 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 99/120
	I0819 12:21:28.312105 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 100/120
	I0819 12:21:29.313326 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 101/120
	I0819 12:21:30.314453 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 102/120
	I0819 12:21:31.316090 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 103/120
	I0819 12:21:32.317324 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 104/120
	I0819 12:21:33.319245 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 105/120
	I0819 12:21:34.320503 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 106/120
	I0819 12:21:35.322713 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 107/120
	I0819 12:21:36.323963 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 108/120
	I0819 12:21:37.325352 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 109/120
	I0819 12:21:38.327163 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 110/120
	I0819 12:21:39.328709 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 111/120
	I0819 12:21:40.330009 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 112/120
	I0819 12:21:41.331411 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 113/120
	I0819 12:21:42.332848 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 114/120
	I0819 12:21:43.334613 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 115/120
	I0819 12:21:44.336134 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 116/120
	I0819 12:21:45.337566 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 117/120
	I0819 12:21:46.339001 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 118/120
	I0819 12:21:47.340580 2110300 main.go:141] libmachine: (ha-584983-m03) Waiting for machine to stop 119/120
	I0819 12:21:48.341400 2110300 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 12:21:48.341469 2110300 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 12:21:48.343298 2110300 out.go:201] 
	W0819 12:21:48.344690 2110300 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 12:21:48.344709 2110300 out.go:270] * 
	W0819 12:21:48.361568 2110300 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 12:21:48.362877 2110300 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-584983 -v=7 --alsologtostderr" : exit status 82
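The GUEST_STOP_TIMEOUT above means the m03 guest was still reported as "Running" after all 120 one-second stop attempts, so the stop command exited with status 82. A minimal way to triage this by hand is to re-run the same stop command and query libvirt directly for the domain state; this sketch assumes the kvm2 driver registers the libvirt domain under the machine name (e.g. ha-584983-m03) and uses the qemu:///system URI shown in the cluster config below:

	# Re-run the stop the test performed, with the same verbose flags
	out/minikube-linux-amd64 stop -p ha-584983 -v=7 --alsologtostderr
	# Ask libvirt whether the m03 guest is actually still running
	virsh -c qemu:///system domstate ha-584983-m03
	# Collect the full minikube logs referenced in the error box above
	out/minikube-linux-amd64 -p ha-584983 logs --file=logs.txt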
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-584983 --wait=true -v=7 --alsologtostderr
E0819 12:21:54.372279 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:24:02.189594 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:25:25.256609 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-584983 --wait=true -v=7 --alsologtostderr: (4m14.059590234s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-584983
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-584983 -n ha-584983
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-584983 logs -n 25: (1.628027289s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m04 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp testdata/cp-test.txt                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983:/home/docker/cp-test_ha-584983-m04_ha-584983.txt                       |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983 sudo cat                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983.txt                                 |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03:/home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m03 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-584983 node stop m02 -v=7                                                     | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-584983 node start m02 -v=7                                                    | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-584983 -v=7                                                           | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-584983 -v=7                                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-584983 --wait=true -v=7                                                    | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-584983                                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:26 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:21:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:21:48.413736 2110774 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:21:48.414056 2110774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:48.414067 2110774 out.go:358] Setting ErrFile to fd 2...
	I0819 12:21:48.414072 2110774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:48.414272 2110774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:21:48.414923 2110774 out.go:352] Setting JSON to false
	I0819 12:21:48.416136 2110774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":162258,"bootTime":1723907850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:21:48.416228 2110774 start.go:139] virtualization: kvm guest
	I0819 12:21:48.418484 2110774 out.go:177] * [ha-584983] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:21:48.419987 2110774 notify.go:220] Checking for updates...
	I0819 12:21:48.419996 2110774 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:21:48.421030 2110774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:21:48.422057 2110774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:21:48.423341 2110774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:21:48.424484 2110774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:21:48.425609 2110774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:21:48.427108 2110774 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:21:48.427300 2110774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:21:48.427746 2110774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:21:48.427825 2110774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:21:48.443601 2110774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0819 12:21:48.444043 2110774 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:21:48.444697 2110774 main.go:141] libmachine: Using API Version  1
	I0819 12:21:48.444727 2110774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:21:48.445163 2110774 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:21:48.445376 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:21:48.481575 2110774 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:21:48.482670 2110774 start.go:297] selected driver: kvm2
	I0819 12:21:48.482691 2110774 start.go:901] validating driver "kvm2" against &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:48.482882 2110774 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:21:48.483344 2110774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:48.483433 2110774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:21:48.498920 2110774 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:21:48.499653 2110774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:21:48.499737 2110774 cni.go:84] Creating CNI manager for ""
	I0819 12:21:48.499752 2110774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 12:21:48.499819 2110774 start.go:340] cluster config:
	{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:48.500021 2110774 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:48.501556 2110774 out.go:177] * Starting "ha-584983" primary control-plane node in "ha-584983" cluster
	I0819 12:21:48.502552 2110774 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:21:48.502596 2110774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:21:48.502607 2110774 cache.go:56] Caching tarball of preloaded images
	I0819 12:21:48.502686 2110774 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:21:48.502703 2110774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:21:48.502832 2110774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:21:48.503020 2110774 start.go:360] acquireMachinesLock for ha-584983: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:21:48.503059 2110774 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "ha-584983"
	I0819 12:21:48.503074 2110774 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:21:48.503082 2110774 fix.go:54] fixHost starting: 
	I0819 12:21:48.503350 2110774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:21:48.503382 2110774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:21:48.517573 2110774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0819 12:21:48.518003 2110774 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:21:48.518486 2110774 main.go:141] libmachine: Using API Version  1
	I0819 12:21:48.518512 2110774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:21:48.518859 2110774 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:21:48.519064 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:21:48.519258 2110774 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:21:48.520776 2110774 fix.go:112] recreateIfNeeded on ha-584983: state=Running err=<nil>
	W0819 12:21:48.520811 2110774 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:21:48.522441 2110774 out.go:177] * Updating the running kvm2 "ha-584983" VM ...
	I0819 12:21:48.523421 2110774 machine.go:93] provisionDockerMachine start ...
	I0819 12:21:48.523438 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:21:48.523642 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.526208 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.526627 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.526660 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.526731 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:48.526899 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.527032 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.527169 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:48.527334 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:48.527586 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:48.527602 2110774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:21:48.646934 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983
	
	I0819 12:21:48.646966 2110774 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:21:48.647222 2110774 buildroot.go:166] provisioning hostname "ha-584983"
	I0819 12:21:48.647252 2110774 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:21:48.647433 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.650070 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.650582 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.650617 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.650842 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:48.651047 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.651227 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.651380 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:48.651589 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:48.651872 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:48.651889 2110774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983 && echo "ha-584983" | sudo tee /etc/hostname
	I0819 12:21:48.783591 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983
	
	I0819 12:21:48.783621 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.786618 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.787045 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.787065 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.787283 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:48.787494 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.787693 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.787870 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:48.788024 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:48.788263 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:48.788285 2110774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:21:48.906743 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:21:48.906779 2110774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:21:48.906801 2110774 buildroot.go:174] setting up certificates
	I0819 12:21:48.906811 2110774 provision.go:84] configureAuth start
	I0819 12:21:48.906823 2110774 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:21:48.907121 2110774 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:21:48.909956 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.910351 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.910372 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.910573 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.913025 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.913416 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.913442 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.913589 2110774 provision.go:143] copyHostCerts
	I0819 12:21:48.913640 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:21:48.913672 2110774 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:21:48.913688 2110774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:21:48.913763 2110774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:21:48.913832 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:21:48.913850 2110774 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:21:48.913857 2110774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:21:48.913879 2110774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:21:48.913917 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:21:48.913933 2110774 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:21:48.913939 2110774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:21:48.913960 2110774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:21:48.914003 2110774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983 san=[127.0.0.1 192.168.39.12 ha-584983 localhost minikube]
	I0819 12:21:49.255336 2110774 provision.go:177] copyRemoteCerts
	I0819 12:21:49.255400 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:21:49.255457 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:49.258102 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.258441 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:49.258469 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.258729 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:49.258891 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:49.259043 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:49.259172 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:21:49.347714 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:21:49.347805 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:21:49.375175 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:21:49.375246 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 12:21:49.402760 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:21:49.402839 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:21:49.429960 2110774 provision.go:87] duration metric: took 523.133813ms to configureAuth
	I0819 12:21:49.429993 2110774 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:21:49.430259 2110774 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:21:49.430379 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:49.433378 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.433800 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:49.433826 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.434051 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:49.434278 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:49.434409 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:49.434590 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:49.434769 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:49.434949 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:49.434967 2110774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:23:20.279355 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:23:20.279390 2110774 machine.go:96] duration metric: took 1m31.755955533s to provisionDockerMachine
	I0819 12:23:20.279406 2110774 start.go:293] postStartSetup for "ha-584983" (driver="kvm2")
	I0819 12:23:20.279418 2110774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:23:20.279457 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.279813 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:23:20.279850 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.283332 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.283911 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.283938 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.284179 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.284384 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.284578 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.284710 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:23:20.373423 2110774 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:23:20.377788 2110774 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:23:20.377811 2110774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:23:20.377874 2110774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:23:20.377948 2110774 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:23:20.377959 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:23:20.378046 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:23:20.388898 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:23:20.414554 2110774 start.go:296] duration metric: took 135.134148ms for postStartSetup
	I0819 12:23:20.414600 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.414903 2110774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 12:23:20.414929 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.417809 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.418183 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.418207 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.418351 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.418550 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.418692 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.418848 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	W0819 12:23:20.502212 2110774 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 12:23:20.502239 2110774 fix.go:56] duration metric: took 1m31.999156534s for fixHost
	I0819 12:23:20.502268 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.505115 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.505531 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.505565 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.505731 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.505923 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.506083 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.506227 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.506410 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:23:20.506586 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:23:20.506597 2110774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:23:20.616811 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070200.585768225
	
	I0819 12:23:20.616837 2110774 fix.go:216] guest clock: 1724070200.585768225
	I0819 12:23:20.616845 2110774 fix.go:229] Guest: 2024-08-19 12:23:20.585768225 +0000 UTC Remote: 2024-08-19 12:23:20.50224815 +0000 UTC m=+92.127849537 (delta=83.520075ms)
	I0819 12:23:20.616883 2110774 fix.go:200] guest clock delta is within tolerance: 83.520075ms
	I0819 12:23:20.616890 2110774 start.go:83] releasing machines lock for "ha-584983", held for 1m32.113822129s
	I0819 12:23:20.616917 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.617172 2110774 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:23:20.619827 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.620186 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.620216 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.620359 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.620820 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.621003 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.621088 2110774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:23:20.621150 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.621209 2110774 ssh_runner.go:195] Run: cat /version.json
	I0819 12:23:20.621235 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.623943 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.623970 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.624406 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.624432 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.624452 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.624475 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.624643 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.624841 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.624854 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.624995 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.625004 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.625142 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.625172 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:23:20.625257 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:23:20.730251 2110774 ssh_runner.go:195] Run: systemctl --version
	I0819 12:23:20.736480 2110774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:23:20.903595 2110774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:23:20.909845 2110774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:23:20.909928 2110774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:23:20.919906 2110774 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
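Note on the step above: the find command renames any pre-existing bridge or podman CNI configs under /etc/cni/net.d to *.mk_disabled so that only the CNI minikube manages (kindnet, per the multinode detection later in this log) stays active; here nothing matched, so nothing was disabled. An illustrative check on the node (not part of the test run) to see what, if anything, was set aside:

  sudo ls -la /etc/cni/net.d/ | grep -i mk_disabled || echo "no CNI configs were disabled"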
	I0819 12:23:20.919934 2110774 start.go:495] detecting cgroup driver to use...
	I0819 12:23:20.920008 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:23:20.936656 2110774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:23:20.950921 2110774 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:23:20.950986 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:23:20.965388 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:23:20.978783 2110774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:23:21.125000 2110774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:23:21.271111 2110774 docker.go:233] disabling docker service ...
	I0819 12:23:21.271186 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:23:21.288073 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:23:21.301485 2110774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:23:21.445452 2110774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:23:21.588883 2110774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:23:21.602854 2110774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:23:21.620929 2110774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:23:21.621010 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.632214 2110774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:23:21.632275 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.642344 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.652583 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.663326 2110774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:23:21.673878 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.685299 2110774 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.695413 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.704975 2110774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:23:21.713964 2110774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:23:21.723170 2110774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:23:21.864657 2110774 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:23:28.906500 2110774 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.041792229s)
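Note on the steps above: the sed edits point CRI-O at registry.k8s.io/pause:3.10 as the pause image and switch it to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the ~7s service restart. A rough sketch of the equivalent manual steps on the node (assuming the same drop-in path) would be:

  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo systemctl daemon-reload && sudo systemctl restart crio

The crictl and crio version checks that follow confirm CRI-O 1.29.1 is answering on /var/run/crio/crio.sock.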
	I0819 12:23:28.906547 2110774 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:23:28.906606 2110774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:23:28.912183 2110774 start.go:563] Will wait 60s for crictl version
	I0819 12:23:28.912253 2110774 ssh_runner.go:195] Run: which crictl
	I0819 12:23:28.916331 2110774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:23:28.954382 2110774 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:23:28.954468 2110774 ssh_runner.go:195] Run: crio --version
	I0819 12:23:28.983584 2110774 ssh_runner.go:195] Run: crio --version
	I0819 12:23:29.013592 2110774 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:23:29.014771 2110774 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:23:29.017448 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:29.017844 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:29.017874 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:29.018084 2110774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:23:29.023014 2110774 kubeadm.go:883] updating cluster {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:23:29.023164 2110774 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:23:29.023214 2110774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:23:29.070028 2110774 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:23:29.070056 2110774 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:23:29.070120 2110774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:23:29.112599 2110774 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:23:29.112627 2110774 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:23:29.112640 2110774 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0819 12:23:29.112778 2110774 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:23:29.112862 2110774 ssh_runner.go:195] Run: crio config
	I0819 12:23:29.159105 2110774 cni.go:84] Creating CNI manager for ""
	I0819 12:23:29.159127 2110774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 12:23:29.159146 2110774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:23:29.159179 2110774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-584983 NodeName:ha-584983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:23:29.159367 2110774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-584983"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
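Note on the rendered kubeadm config above: it combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration for v1.31.0, with the control-plane endpoint pinned to control-plane.minikube.internal:8443 and the pod subnet 10.244.0.0/16. It is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down; to inspect what actually landed on the node one could, for example, run (illustrative command, not part of the test):

  out/minikube-linux-amd64 -p ha-584983 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"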
	
	I0819 12:23:29.159392 2110774 kube-vip.go:115] generating kube-vip config ...
	I0819 12:23:29.159450 2110774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:23:29.171176 2110774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:23:29.171297 2110774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
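Note on the kube-vip manifest above: it is generated as a static pod (written to /etc/kubernetes/manifests/kube-vip.yaml below) that advertises the shared API-server VIP 192.168.39.254 on port 8443 and, with lb_enable set, load-balances across the control-plane nodes, matching the "auto-enabling control-plane load-balancing" line earlier. An illustrative spot-check once the node is up (not part of the test run) could be:

  out/minikube-linux-amd64 -p ha-584983 ssh "ip -4 addr show eth0 | grep 192.168.39.254"
  curl -sk https://192.168.39.254:8443/healthz   # should print "ok" once the API server behind the VIP is healthy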
	I0819 12:23:29.171367 2110774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:23:29.180930 2110774 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:23:29.181058 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 12:23:29.190220 2110774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0819 12:23:29.206368 2110774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:23:29.222288 2110774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0819 12:23:29.238152 2110774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 12:23:29.254848 2110774 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:23:29.258515 2110774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:23:29.405862 2110774 ssh_runner.go:195] Run: sudo systemctl start kubelet
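Note: at this point the kubelet drop-in (10-kubeadm.conf), the kubelet unit, the kubeadm config and the kube-vip manifest have all been copied over and kubelet has been (re)started. A quick sanity check on the node (illustrative only) could be:

  sudo systemctl is-active kubelet
  systemctl cat kubelet | grep -- '--node-ip'   # the drop-in pins --node-ip=192.168.39.12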
	I0819 12:23:29.421009 2110774 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.12
	I0819 12:23:29.421036 2110774 certs.go:194] generating shared ca certs ...
	I0819 12:23:29.421053 2110774 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:23:29.421220 2110774 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:23:29.421259 2110774 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:23:29.421268 2110774 certs.go:256] generating profile certs ...
	I0819 12:23:29.421336 2110774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:23:29.421361 2110774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4
	I0819 12:23:29.421389 2110774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.84 192.168.39.206 192.168.39.254]
	I0819 12:23:29.677970 2110774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4 ...
	I0819 12:23:29.678005 2110774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4: {Name:mk7fd50c9344c6f7fcffebee0d5aa98b656971cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:23:29.678184 2110774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4 ...
	I0819 12:23:29.678195 2110774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4: {Name:mk7ab8578bd0d4f5cca191ddcdc37726b8725793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:23:29.678267 2110774 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:23:29.678408 2110774 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:23:29.678544 2110774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:23:29.678562 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:23:29.678575 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:23:29.678589 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:23:29.678602 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:23:29.678614 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:23:29.678629 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:23:29.678643 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:23:29.678655 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:23:29.678708 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:23:29.678741 2110774 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:23:29.678751 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:23:29.678773 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:23:29.678795 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:23:29.678819 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:23:29.678858 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:23:29.678895 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.678909 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:29.678922 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:23:29.679579 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:23:29.705701 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:23:29.728689 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:23:29.751471 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:23:29.774201 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 12:23:29.797903 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:23:29.821550 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:23:29.845338 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:23:29.868882 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:23:29.891762 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:23:29.914153 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:23:29.937347 2110774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:23:29.953746 2110774 ssh_runner.go:195] Run: openssl version
	I0819 12:23:29.959436 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:23:29.969716 2110774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.973964 2110774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.974008 2110774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.979511 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:23:29.988441 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:23:29.998936 2110774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:30.003193 2110774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:30.003250 2110774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:30.008770 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:23:30.017716 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:23:30.027834 2110774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:23:30.031863 2110774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:23:30.031916 2110774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:23:30.037260 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 12:23:30.045897 2110774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:23:30.050103 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:23:30.055388 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:23:30.060961 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:23:30.066380 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:23:30.072044 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:23:30.077442 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
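Note on the certificate steps above: the openssl runs first link each CA into /etc/ssl/certs under its subject-hash name (the 8-hex-digit *.0 links, which is how OpenSSL looks CAs up by hash) and then use -checkend 86400, which exits 0 only if a certificate stays valid for at least another 86400 seconds (24 hours); this appears to be how the restart path decides the existing control-plane certs can be reused rather than regenerated. For example (illustrative check, not part of the test):

  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
    && echo "apiserver cert valid for at least 24h" || echo "apiserver cert expires within 24h"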
	I0819 12:23:30.083074 2110774 kubeadm.go:392] StartCluster: {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:23:30.083206 2110774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:23:30.083258 2110774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:23:30.120858 2110774 cri.go:89] found id: "a1594f4dc78b05f91ea7e36c9c3e0b220cde5d2087cd6e925752c2dc8eba3344"
	I0819 12:23:30.120883 2110774 cri.go:89] found id: "710ed8ebd42cd9f5091e2d029f61807ee84573aca51a8fec83b53682c60bec5d"
	I0819 12:23:30.120886 2110774 cri.go:89] found id: "567f42992a943a79c52c61dd56da3444360dd907a47faf5b97f7dd88dd801b5c"
	I0819 12:23:30.120889 2110774 cri.go:89] found id: "8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43"
	I0819 12:23:30.120891 2110774 cri.go:89] found id: "214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577"
	I0819 12:23:30.120894 2110774 cri.go:89] found id: "c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534"
	I0819 12:23:30.120897 2110774 cri.go:89] found id: "668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb"
	I0819 12:23:30.120899 2110774 cri.go:89] found id: "6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0"
	I0819 12:23:30.120901 2110774 cri.go:89] found id: "993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744"
	I0819 12:23:30.120908 2110774 cri.go:89] found id: "799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94"
	I0819 12:23:30.120910 2110774 cri.go:89] found id: "65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857"
	I0819 12:23:30.120913 2110774 cri.go:89] found id: "7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d"
	I0819 12:23:30.120915 2110774 cri.go:89] found id: "e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7"
	I0819 12:23:30.120918 2110774 cri.go:89] found id: ""
	I0819 12:23:30.120962 2110774 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.088785083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070363088760013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2c09425-9e6a-480f-8afa-539e74028335 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.089221668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1b84202-4ac6-486b-9c72-8f49f8ec3382 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.089290236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1b84202-4ac6-486b-9c72-8f49f8ec3382 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.089771994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1b84202-4ac6-486b-9c72-8f49f8ec3382 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.130632503Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a00be272-c15f-4c63-b134-42a1057a9085 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.130717876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a00be272-c15f-4c63-b134-42a1057a9085 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.131647261Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d67964dc-c9c6-48e4-820c-d1c712c1ffae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.132090233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070363132070276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d67964dc-c9c6-48e4-820c-d1c712c1ffae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.132719575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d43e72e-7b6f-40bb-a419-aeae07b1f9f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.132790030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d43e72e-7b6f-40bb-a419-aeae07b1f9f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.133203146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d43e72e-7b6f-40bb-a419-aeae07b1f9f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.176102454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abfea492-41b5-4ad2-974e-dd82ad2c3fd8 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.176193712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abfea492-41b5-4ad2-974e-dd82ad2c3fd8 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.177583665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab48b93d-1185-4e3d-86e7-af94ee15eecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.178043511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070363178021931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab48b93d-1185-4e3d-86e7-af94ee15eecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.178533552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04fafb81-8292-412e-b1dc-37437b797770 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.178614152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04fafb81-8292-412e-b1dc-37437b797770 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.179027498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04fafb81-8292-412e-b1dc-37437b797770 name=/runtime.v1.RuntimeService/ListContainers
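	(Editorial aside, not part of the captured crio log: the repeated /runtime.v1.RuntimeService/ListContainers entries above are ordinary CRI polling by the test harness. A minimal Go sketch of issuing the same RPC directly is shown below; it assumes the default CRI-O socket path /var/run/crio/crio.sock and that the k8s.io/cri-api v1 client packages are available on the node, which this report does not itself confirm.)

	// ListContainers sketch against a local CRI-O socket (assumptions noted above).
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O runtime socket over a plaintext local unix connection.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Same RPC as the ListContainers entries in the log: an empty filter
		// returns the full container list, running and exited alike.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %-25s attempt=%d state=%s\n",
				c.Id[:12], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	(The equivalent one-liner on the node, if crictl is installed, would be roughly: sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a.)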
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.226606483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5e6bfca-5f51-4267-ac6a-f6848e28e0d7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.226720210Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5e6bfca-5f51-4267-ac6a-f6848e28e0d7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.227690923Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a49a207d-8b50-410b-8191-f8e023f9de73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.228336599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070363228313224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a49a207d-8b50-410b-8191-f8e023f9de73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.228802277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76b8d925-8af0-4bcf-91af-d8c2cd52df40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.228887457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76b8d925-8af0-4bcf-91af-d8c2cd52df40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:26:03 ha-584983 crio[3578]: time="2024-08-19 12:26:03.229287829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76b8d925-8af0-4bcf-91af-d8c2cd52df40 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	866e8aa44d27a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   81a3fea1980df       storage-provisioner
	37b657589fb9d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            3                   dd1eb0deb73b0       kube-apiserver-ha-584983
	c27a1380c1eae       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   01c21171dc10d       kube-controller-manager-ha-584983
	34311025d9fbf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   81a3fea1980df       storage-provisioner
	5aff679da8d7d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   cf3b96668670c       busybox-7dff88458-s954n
	807fbcd627afb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   b874caea6ef56       kube-vip-ha-584983
	ed810ef421941       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   7309ad62e1aef       kindnet-dg8d6
	a07587b992d4d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   fcd51d500dbfa       coredns-6f6b679f8f-jgknw
	dde389f70f091       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   7c17fbf7fe5ee       kube-proxy-w7x44
	11e61168d93d3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   cb3e15ff66a8b       etcd-ha-584983
	2edd88cca052e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   01c21171dc10d       kube-controller-manager-ha-584983
	5c1f3659ab0dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e877fe1138e16       coredns-6f6b679f8f-wll5k
	98df8dc7e54d9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   dd1eb0deb73b0       kube-apiserver-ha-584983
	a775313d14a7d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   5434a8cdacbb0       kube-scheduler-ha-584983
	c12861dc1b990       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   f1429b3feda56       busybox-7dff88458-s954n
	8f808fb6e4a29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   4d323e7bde365       coredns-6f6b679f8f-jgknw
	214bc022395ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   9fff4f7ac43f0       coredns-6f6b679f8f-wll5k
	668ac1e0d9a1b       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   a9564aacd2862       kindnet-dg8d6
	6bcc112384a64       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   f34234b07799c       kube-proxy-w7x44
	799883f5e6773       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   22491c1bbcc37       kube-scheduler-ha-584983
	e2afc8c50fc96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   f3aa1be314868       etcd-ha-584983
	
	
	==> coredns [214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577] <==
	[INFO] 10.244.2.2:37261 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.045080098s
	[INFO] 10.244.2.2:51973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220525s
	[INFO] 10.244.2.2:34558 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142147s
	[INFO] 10.244.2.2:33150 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000263427s
	[INFO] 10.244.0.4:58833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122286s
	[INFO] 10.244.0.4:37194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129784s
	[INFO] 10.244.1.2:34921 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829614s
	[INFO] 10.244.1.2:53910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115811s
	[INFO] 10.244.1.2:37254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008251s
	[INFO] 10.244.2.2:43347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012466s
	[INFO] 10.244.2.2:51494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210784s
	[INFO] 10.244.2.2:36691 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100028s
	[INFO] 10.244.0.4:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124263s
	[INFO] 10.244.0.4:51765 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099031s
	[INFO] 10.244.0.4:47207 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071543s
	[INFO] 10.244.0.4:37638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047976s
	[INFO] 10.244.1.2:36335 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080226s
	[INFO] 10.244.2.2:35526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107074s
	[INFO] 10.244.2.2:52104 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134936s
	[INFO] 10.244.0.4:37351 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075755s
	[INFO] 10.244.0.4:47268 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121675s
	[INFO] 10.244.1.2:55348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134871s
	[INFO] 10.244.1.2:40841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:53192->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:53192->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43] <==
	[INFO] 10.244.2.2:34121 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214046s
	[INFO] 10.244.2.2:56792 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.042778287s
	[INFO] 10.244.0.4:59540 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001673972s
	[INFO] 10.244.0.4:42038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221674s
	[INFO] 10.244.0.4:46483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00106179s
	[INFO] 10.244.0.4:48125 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091001s
	[INFO] 10.244.0.4:37333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069922s
	[INFO] 10.244.0.4:50093 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066014s
	[INFO] 10.244.1.2:35478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106276s
	[INFO] 10.244.1.2:32788 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077319s
	[INFO] 10.244.1.2:48500 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062887s
	[INFO] 10.244.1.2:52624 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001380815s
	[INFO] 10.244.1.2:35526 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215556s
	[INFO] 10.244.2.2:42664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169406s
	[INFO] 10.244.1.2:47636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107912s
	[INFO] 10.244.1.2:57549 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279969s
	[INFO] 10.244.1.2:37605 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079457s
	[INFO] 10.244.2.2:59708 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107128s
	[INFO] 10.244.2.2:54205 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121358s
	[INFO] 10.244.0.4:57068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104078s
	[INFO] 10.244.0.4:48200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000268832s
	[INFO] 10.244.1.2:48471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138271s
	[INFO] 10.244.1.2:34249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107161s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a07587b992d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf] <==
	Trace[773651555]: [10.511655974s] [10.511655974s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43938->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1571512260]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 12:23:48.480) (total time: 12606ms):
	Trace[1571512260]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43926->10.96.0.1:443: read: connection reset by peer 12606ms (12:24:01.086)
	Trace[1571512260]: [12.606478968s] [12.606478968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-584983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:26:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:13:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-584983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cd8edc3a2c24fa5a0144251d1079f81
	  System UUID:                9cd8edc3-a2c2-4fa5-a014-4251d1079f81
	  Boot ID:                    c299daa9-50a5-44ae-ad64-3cd1ed4c7fa4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s954n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-jgknw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-wll5k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-584983                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dg8d6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-584983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-584983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-w7x44                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-584983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-584983                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-584983 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-584983 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-584983 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-584983 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Warning  ContainerGCFailed        3m15s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m37s (x3 over 3m26s)  kubelet          Node ha-584983 status is now: NodeNotReady
	  Normal   RegisteredNode           107s                   node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	
	
	Name:               ha-584983-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_13_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:13:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:26:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-584983-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c94b4594d79f4ec0a8dd96aab1847eb5
	  System UUID:                c94b4594-d79f-4ec0-a8dd-96aab1847eb5
	  Boot ID:                    538b89e0-9118-4821-8280-61972cf6e9f6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xjjwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-584983-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-pb989                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-584983-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-584983-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cwlpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-584983-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-584983-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 87s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-584983-m02 status is now: CIDRAssignmentFailed
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-584983-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-584983-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-584983-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  NodeNotReady             9m                     node-controller  Node ha-584983-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-584983-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                   node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           100s                   node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           35s                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	
	
	Name:               ha-584983-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_14_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:14:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:26:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:25:39 +0000   Mon, 19 Aug 2024 12:25:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:25:39 +0000   Mon, 19 Aug 2024 12:25:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:25:39 +0000   Mon, 19 Aug 2024 12:25:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:25:39 +0000   Mon, 19 Aug 2024 12:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-584983-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3194fc0bd2184b688dfa8865a1e17091
	  System UUID:                3194fc0b-d218-4b68-8dfa-8865a1e17091
	  Boot ID:                    986ae481-5be0-475f-9760-a1bb9b5327e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hxh55                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-584983-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-2llhd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-584983-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-584983-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-f4pns                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-584983-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-584983-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 38s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-584983-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-584983-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           11m                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	  Normal   NodeNotReady             67s                node-controller  Node ha-584983-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-584983-m03 has been rebooted, boot id: 986ae481-5be0-475f-9760-a1bb9b5327e7
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-584983-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-584983-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                55s                kubelet          Node ha-584983-m03 status is now: NodeReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-584983-m03 event: Registered Node ha-584983-m03 in Controller
	
	
	Name:               ha-584983-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_15_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:25:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:25:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:25:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:25:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-584983-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfad94302a8543c8a125516d743187f8
	  System UUID:                cfad9430-2a85-43c8-a125-516d743187f8
	  Boot ID:                    a2843058-f472-4998-842f-faede31ec3e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cjfsg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-rd7nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientMemory
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-584983-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           10m                node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-584983-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   NodeReady                9m55s              kubelet          Node ha-584983-m04 status is now: NodeReady
	  Normal   RegisteredNode           107s               node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   NodeNotReady             67s                node-controller  Node ha-584983-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-584983-m04 has been rebooted, boot id: a2843058-f472-4998-842f-faede31ec3e3
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-584983-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-584983-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-584983-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-584983-m04 status is now: NodeReady
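
	The Ready conditions summarized in the node descriptions above can be spot-checked more compactly than with a full describe; a minimal sketch (assuming the kubectl context carries the profile name ha-584983, as minikube sets by default):

	  kubectl --context ha-584983 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'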
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.203525] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.059214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056195] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.190463] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.123484] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.274294] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.959189] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.909497] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056139] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.254427] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090121] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.241682] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 12:13] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 12:23] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +0.144911] systemd-fstab-generator[3509]: Ignoring "noauto" option for root device
	[  +0.169889] systemd-fstab-generator[3523]: Ignoring "noauto" option for root device
	[  +0.160744] systemd-fstab-generator[3535]: Ignoring "noauto" option for root device
	[  +0.271195] systemd-fstab-generator[3563]: Ignoring "noauto" option for root device
	[  +7.531724] systemd-fstab-generator[3664]: Ignoring "noauto" option for root device
	[  +0.092689] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.554400] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.010494] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.053630] kauditd_printk_skb: 2 callbacks suppressed
	[Aug19 12:24] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3] <==
	{"level":"warn","ts":"2024-08-19T12:25:04.317651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:25:04.417609Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:25:04.444902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:25:04.516844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T12:25:06.394790Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.206:2380/version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:06.394916Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:07.518037Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"cc5def239a0ccee8","rtt":"0s","error":"dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:07.518332Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"cc5def239a0ccee8","rtt":"0s","error":"dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:10.397114Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.206:2380/version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:10.397364Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:12.518480Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"cc5def239a0ccee8","rtt":"0s","error":"dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:12.518639Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"cc5def239a0ccee8","rtt":"0s","error":"dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:14.399864Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.206:2380/version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:14.399936Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:17.519353Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"cc5def239a0ccee8","rtt":"0s","error":"dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:17.519571Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"cc5def239a0ccee8","rtt":"0s","error":"dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:18.402121Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.206:2380/version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T12:25:18.402194Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cc5def239a0ccee8","error":"Get \"https://192.168.39.206:2380/version\": dial tcp 192.168.39.206:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-19T12:25:20.537819Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:25:20.554013Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:25:20.554432Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"cc5def239a0ccee8","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T12:25:20.554566Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:25:20.555183Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:25:20.561025Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"cc5def239a0ccee8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T12:25:20.561087Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	
	
	==> etcd [e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7] <==
	2024/08/19 12:21:49 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T12:21:49.645076Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1349832400158465101,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T12:21:49.655349Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:21:49.655394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:21:49.655530Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ab0e927fe14112bb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T12:21:49.655725Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655760Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655789Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655863Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655930Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655962Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655988Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655995Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656002Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656018Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656088Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656179Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656191Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.659023Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"warn","ts":"2024-08-19T12:21:49.659047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.015988049s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T12:21:49.659192Z","caller":"traceutil/trace.go:171","msg":"trace[817397716] range","detail":"{range_begin:; range_end:; }","duration":"9.016145206s","start":"2024-08-19T12:21:40.643038Z","end":"2024-08-19T12:21:49.659183Z","steps":["trace[817397716] 'agreement among raft nodes before linearized reading'  (duration: 9.015985715s)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:21:49.659139Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-08-19T12:21:49.659375Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-584983","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"]}
	{"level":"error","ts":"2024-08-19T12:21:49.659265Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 12:26:03 up 13 min,  0 users,  load average: 0.27, 0.48, 0.30
	Linux ha-584983 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb] <==
	I0819 12:21:15.486375       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:25.486949       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:21:25.487036       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:25.487239       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:21:25.487271       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:21:25.487351       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:21:25.487371       1 main.go:299] handling current node
	I0819 12:21:25.487392       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:21:25.487407       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:21:35.493559       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:21:35.493690       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:35.493871       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:21:35.493899       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:21:35.493988       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:21:35.495557       1 main.go:299] handling current node
	I0819 12:21:35.495791       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:21:35.495916       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:21:45.493553       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:21:45.493670       1 main.go:299] handling current node
	I0819 12:21:45.493706       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:21:45.493769       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:21:45.493985       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:21:45.494016       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:45.494100       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:21:45.494156       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b] <==
	I0819 12:25:27.998161       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:25:37.994554       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:25:37.994680       1 main.go:299] handling current node
	I0819 12:25:37.994726       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:25:37.994749       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:25:37.994916       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:25:37.994941       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:25:37.995008       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:25:37.995026       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:25:47.998984       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:25:47.999053       1 main.go:299] handling current node
	I0819 12:25:47.999108       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:25:47.999118       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:25:47.999375       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:25:47.999430       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:25:47.999680       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:25:47.999713       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:25:57.994017       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:25:57.994061       1 main.go:299] handling current node
	I0819 12:25:57.994074       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:25:57.994102       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:25:57.994251       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:25:57.994277       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:25:57.994342       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:25:57.994362       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be] <==
	I0819 12:24:20.045157       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 12:24:20.131416       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:24:20.131821       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:24:20.141133       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:24:20.141507       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:24:20.141576       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:24:20.141681       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:24:20.145310       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:24:20.146416       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:24:20.149118       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:24:20.149374       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:24:20.149412       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:24:20.149525       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:24:20.168008       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:24:20.187182       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:24:20.196959       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:24:20.196999       1 policy_source.go:224] refreshing policies
	I0819 12:24:20.248276       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0819 12:24:20.310280       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206 192.168.39.84]
	I0819 12:24:20.311770       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:24:20.317935       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 12:24:20.323658       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 12:24:21.039763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 12:24:21.438856       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.206 192.168.39.84]
	W0819 12:24:31.439093       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.84]
	
	
	==> kube-apiserver [98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6] <==
	I0819 12:23:36.889512       1 options.go:228] external host was not specified, using 192.168.39.12
	I0819 12:23:36.916623       1 server.go:142] Version: v1.31.0
	I0819 12:23:36.916662       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:37.680341       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 12:23:37.703384       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:23:37.705633       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 12:23:37.707692       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 12:23:37.707903       1 instance.go:232] Using reconciler: lease
	W0819 12:23:57.674846       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 12:23:57.674845       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0819 12:23:57.708409       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
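
	This apiserver container exited fatally because etcd at 127.0.0.1:2379 never became reachable within the startup deadline ("Error creating leases: error creating storage factory: context deadline exceeded"). Once a replacement instance is serving, as in the preceding apiserver log, its readiness can be probed directly; a minimal sketch:

	  kubectl --context ha-584983 get --raw='/readyz?verbose'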
	
	
	==> kube-controller-manager [2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38] <==
	I0819 12:23:37.794928       1 serving.go:386] Generated self-signed cert in-memory
	I0819 12:23:38.038946       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 12:23:38.038985       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:38.040989       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 12:23:38.041679       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 12:23:38.041829       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:23:38.041935       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0819 12:23:58.714248       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.12:8443/healthz\": dial tcp 192.168.39.12:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9] <==
	I0819 12:24:46.103213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="210.588µs"
	I0819 12:24:56.738163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:24:56.739115       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-584983-m04"
	I0819 12:24:56.741735       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:24:56.779118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:24:56.796361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:24:56.846728       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.827421ms"
	I0819 12:24:56.846882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.575µs"
	I0819 12:24:58.790606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:25:00.298332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m02"
	I0819 12:25:02.001878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:25:08.865805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:25:08.881837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:25:08.995890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:25:09.868788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.853µs"
	I0819 12:25:11.973802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:25:23.391357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.84146ms"
	I0819 12:25:23.392327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.836µs"
	I0819 12:25:28.354751       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:25:28.444095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:25:39.438585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m03"
	I0819 12:25:55.757242       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-584983-m04"
	I0819 12:25:55.757982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:25:55.773178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:25:56.997608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	
	
	==> kube-proxy [6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0] <==
	E0819 12:20:42.561243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:45.630354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:45.630558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:45.631115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:45.631181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:45.631529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:45.631589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:51.774822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:51.774891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:51.774978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:51.775017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:51.775080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:51.775095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:00.992120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:00.992367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:04.063888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:04.064310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:04.064405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:04.064527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:25.566627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:25.566899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:28.637926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:28.637985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:31.710979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:31.711638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
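
	The repeated "no route to host" errors all target 192.168.39.254:8443, the address that control-plane.minikube.internal resolves to and which appears to be the kube-vip-managed control-plane VIP. Reachability of that endpoint from inside the node can be checked with a plain HTTPS probe (a sketch; /healthz is normally served to unauthenticated clients):

	  minikube -p ha-584983 ssh -- curl -sk https://192.168.39.254:8443/healthz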
	
	
	==> kube-proxy [dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:23:40.734026       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:23:43.806737       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:23:46.878843       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:23:53.021968       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:24:05.310179       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 12:24:24.403293       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0819 12:24:24.403496       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:24:24.443854       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:24:24.443905       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:24:24.443982       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:24:24.446750       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:24:24.447130       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:24:24.447173       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:24:24.449061       1 config.go:197] "Starting service config controller"
	I0819 12:24:24.449162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:24:24.449219       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:24:24.449281       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:24:24.450054       1 config.go:326] "Starting node config controller"
	I0819 12:24:24.450105       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:24:24.549500       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:24:24.549564       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:24:24.550927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94] <==
	E0819 12:12:45.070724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:45.161155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:45.161289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 12:12:45.489324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:15:14.882392       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hxh55" node="ha-584983-m03"
	E0819 12:15:14.883871       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" pod="default/busybox-7dff88458-hxh55"
	E0819 12:15:50.520891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	E0819 12:15:50.521012       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d49d2a08-95ae-4ebd-a52e-3d0575bfd9f5(kube-system/kube-proxy-fbckf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fbckf"
	E0819 12:15:50.521045       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" pod="kube-system/kube-proxy-fbckf"
	I0819 12:15:50.521064       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	E0819 12:21:40.432533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:41.258178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 12:21:42.415879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:42.568634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 12:21:42.952549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:44.332727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 12:21:44.423973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:45.087916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 12:21:45.373710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 12:21:45.734352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 12:21:46.076058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 12:21:47.327396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 12:21:47.448497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 12:21:47.856078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 12:21:49.556967       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc] <==
	W0819 12:24:13.878380       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:13.878515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:14.591422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:14.591696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:15.954181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:15.954252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.046882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.12:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.046950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.12:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.318614       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.12:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.318701       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.12:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.362195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.12:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.362278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.12:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.552891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.552967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.675379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.675541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.847648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.12:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.847711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.12:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:17.193947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:17.194015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:17.520707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.12:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:17.520752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.12:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:20.074404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 12:24:20.074601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 12:24:32.529621       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:24:48 ha-584983 kubelet[1315]: E0819 12:24:48.647740    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070288647132819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:24:58 ha-584983 kubelet[1315]: E0819 12:24:58.650041    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070298649809774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:24:58 ha-584983 kubelet[1315]: E0819 12:24:58.650071    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070298649809774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:24:59 ha-584983 kubelet[1315]: I0819 12:24:59.437195    1315 scope.go:117] "RemoveContainer" containerID="34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c"
	Aug 19 12:25:00 ha-584983 kubelet[1315]: I0819 12:25:00.569190    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-s954n" podStartSLOduration=585.74719518 podStartE2EDuration="9m46.569162646s" podCreationTimestamp="2024-08-19 12:15:14 +0000 UTC" firstStartedPulling="2024-08-19 12:15:15.48673195 +0000 UTC m=+147.195491657" lastFinishedPulling="2024-08-19 12:15:16.308699414 +0000 UTC m=+148.017459123" observedRunningTime="2024-08-19 12:15:17.082560913 +0000 UTC m=+148.791320620" watchObservedRunningTime="2024-08-19 12:25:00.569162646 +0000 UTC m=+732.277922372"
	Aug 19 12:25:08 ha-584983 kubelet[1315]: E0819 12:25:08.651681    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070308651341374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:08 ha-584983 kubelet[1315]: E0819 12:25:08.651717    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070308651341374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:14 ha-584983 kubelet[1315]: I0819 12:25:14.436883    1315 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-584983" podUID="ed94966c-ea7e-4d24-88c3-2a5106f418d4"
	Aug 19 12:25:14 ha-584983 kubelet[1315]: I0819 12:25:14.456238    1315 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-584983"
	Aug 19 12:25:18 ha-584983 kubelet[1315]: I0819 12:25:18.456554    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-584983" podStartSLOduration=4.456524391 podStartE2EDuration="4.456524391s" podCreationTimestamp="2024-08-19 12:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 12:25:18.456356656 +0000 UTC m=+750.165116381" watchObservedRunningTime="2024-08-19 12:25:18.456524391 +0000 UTC m=+750.165284121"
	Aug 19 12:25:18 ha-584983 kubelet[1315]: E0819 12:25:18.664392    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070318658035665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:18 ha-584983 kubelet[1315]: E0819 12:25:18.664566    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070318658035665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:28 ha-584983 kubelet[1315]: E0819 12:25:28.666202    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070328665870157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:28 ha-584983 kubelet[1315]: E0819 12:25:28.666225    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070328665870157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:38 ha-584983 kubelet[1315]: E0819 12:25:38.669021    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070338668096434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:38 ha-584983 kubelet[1315]: E0819 12:25:38.669656    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070338668096434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:48 ha-584983 kubelet[1315]: E0819 12:25:48.453658    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:25:48 ha-584983 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:25:48 ha-584983 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:25:48 ha-584983 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:25:48 ha-584983 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:25:48 ha-584983 kubelet[1315]: E0819 12:25:48.671850    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070348671575841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:48 ha-584983 kubelet[1315]: E0819 12:25:48.671882    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070348671575841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:58 ha-584983 kubelet[1315]: E0819 12:25:58.674640    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070358674122467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:25:58 ha-584983 kubelet[1315]: E0819 12:25:58.674669    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070358674122467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:26:02.804766 2112178 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19479-2086935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
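The "bufio.Scanner: token too long" error in the stderr above comes from Go's bufio.Scanner hitting its default 64 KiB token limit on an oversized line while reading lastStart.txt. A minimal, self-contained sketch (not minikube's code; the file name and the 1 MiB buffer size are assumptions) showing how the error arises and how a larger scanner buffer avoids it:

// Sketch only: demonstrates why bufio.Scanner reports "token too long"
// and how enlarging its buffer prevents it. The path and sizes are assumed.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical log file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); a single line
	// longer than that makes Scan() stop with bufio.ErrTooLong.
	// Giving the scanner a larger buffer (1 MiB here, an assumed size) avoids it.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err) // e.g. bufio.Scanner: token too long
	}
}

With the default buffer, sc.Err() returns bufio.ErrTooLong as soon as one line exceeds 64 KiB, which matches the failure reported when dumping the last-start log.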
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-584983 -n ha-584983
helpers_test.go:261: (dbg) Run:  kubectl --context ha-584983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.37s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 stop -v=7 --alsologtostderr
E0819 12:26:26.668444 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 stop -v=7 --alsologtostderr: exit status 82 (2m0.479219481s)

                                                
                                                
-- stdout --
	* Stopping node "ha-584983-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:26:22.458766 2112590 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:26:22.458921 2112590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:26:22.458933 2112590 out.go:358] Setting ErrFile to fd 2...
	I0819 12:26:22.458940 2112590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:26:22.459149 2112590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:26:22.459432 2112590 out.go:352] Setting JSON to false
	I0819 12:26:22.459542 2112590 mustload.go:65] Loading cluster: ha-584983
	I0819 12:26:22.459967 2112590 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:26:22.460088 2112590 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:26:22.460330 2112590 mustload.go:65] Loading cluster: ha-584983
	I0819 12:26:22.460497 2112590 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:26:22.460550 2112590 stop.go:39] StopHost: ha-584983-m04
	I0819 12:26:22.460968 2112590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:26:22.461027 2112590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:26:22.476778 2112590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0819 12:26:22.477386 2112590 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:26:22.478014 2112590 main.go:141] libmachine: Using API Version  1
	I0819 12:26:22.478041 2112590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:26:22.478420 2112590 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:26:22.480697 2112590 out.go:177] * Stopping node "ha-584983-m04"  ...
	I0819 12:26:22.481612 2112590 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 12:26:22.481647 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:26:22.481883 2112590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 12:26:22.481915 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:26:22.485059 2112590 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:26:22.485492 2112590 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:25:50 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:26:22.485532 2112590 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:26:22.485659 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:26:22.485830 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:26:22.485952 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:26:22.486077 2112590 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	I0819 12:26:22.567506 2112590 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 12:26:22.620836 2112590 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 12:26:22.672907 2112590 main.go:141] libmachine: Stopping "ha-584983-m04"...
	I0819 12:26:22.672935 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:26:22.674566 2112590 main.go:141] libmachine: (ha-584983-m04) Calling .Stop
	I0819 12:26:22.678441 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 0/120
	I0819 12:26:23.680030 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 1/120
	I0819 12:26:24.681434 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 2/120
	I0819 12:26:25.683006 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 3/120
	I0819 12:26:26.684271 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 4/120
	I0819 12:26:27.686071 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 5/120
	I0819 12:26:28.687612 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 6/120
	I0819 12:26:29.689254 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 7/120
	I0819 12:26:30.690576 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 8/120
	I0819 12:26:31.692222 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 9/120
	I0819 12:26:32.694626 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 10/120
	I0819 12:26:33.696087 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 11/120
	I0819 12:26:34.697424 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 12/120
	I0819 12:26:35.698817 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 13/120
	I0819 12:26:36.700210 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 14/120
	I0819 12:26:37.701762 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 15/120
	I0819 12:26:38.703668 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 16/120
	I0819 12:26:39.705144 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 17/120
	I0819 12:26:40.706786 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 18/120
	I0819 12:26:41.708238 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 19/120
	I0819 12:26:42.710280 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 20/120
	I0819 12:26:43.711829 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 21/120
	I0819 12:26:44.713369 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 22/120
	I0819 12:26:45.714609 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 23/120
	I0819 12:26:46.716366 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 24/120
	I0819 12:26:47.718339 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 25/120
	I0819 12:26:48.719780 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 26/120
	I0819 12:26:49.721304 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 27/120
	I0819 12:26:50.722844 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 28/120
	I0819 12:26:51.724125 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 29/120
	I0819 12:26:52.726495 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 30/120
	I0819 12:26:53.727733 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 31/120
	I0819 12:26:54.729202 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 32/120
	I0819 12:26:55.730495 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 33/120
	I0819 12:26:56.731910 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 34/120
	I0819 12:26:57.733849 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 35/120
	I0819 12:26:58.735291 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 36/120
	I0819 12:26:59.736642 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 37/120
	I0819 12:27:00.738147 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 38/120
	I0819 12:27:01.739461 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 39/120
	I0819 12:27:02.741608 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 40/120
	I0819 12:27:03.743573 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 41/120
	I0819 12:27:04.746048 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 42/120
	I0819 12:27:05.747369 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 43/120
	I0819 12:27:06.748635 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 44/120
	I0819 12:27:07.750773 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 45/120
	I0819 12:27:08.752210 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 46/120
	I0819 12:27:09.753376 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 47/120
	I0819 12:27:10.754600 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 48/120
	I0819 12:27:11.755759 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 49/120
	I0819 12:27:12.757758 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 50/120
	I0819 12:27:13.759352 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 51/120
	I0819 12:27:14.760770 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 52/120
	I0819 12:27:15.762907 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 53/120
	I0819 12:27:16.764120 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 54/120
	I0819 12:27:17.765699 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 55/120
	I0819 12:27:18.766998 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 56/120
	I0819 12:27:19.768190 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 57/120
	I0819 12:27:20.770275 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 58/120
	I0819 12:27:21.771527 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 59/120
	I0819 12:27:22.773537 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 60/120
	I0819 12:27:23.774861 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 61/120
	I0819 12:27:24.776966 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 62/120
	I0819 12:27:25.778637 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 63/120
	I0819 12:27:26.779812 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 64/120
	I0819 12:27:27.781488 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 65/120
	I0819 12:27:28.782842 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 66/120
	I0819 12:27:29.784155 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 67/120
	I0819 12:27:30.785552 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 68/120
	I0819 12:27:31.786776 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 69/120
	I0819 12:27:32.788646 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 70/120
	I0819 12:27:33.790775 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 71/120
	I0819 12:27:34.792073 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 72/120
	I0819 12:27:35.793837 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 73/120
	I0819 12:27:36.795048 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 74/120
	I0819 12:27:37.797020 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 75/120
	I0819 12:27:38.798633 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 76/120
	I0819 12:27:39.799898 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 77/120
	I0819 12:27:40.801380 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 78/120
	I0819 12:27:41.802785 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 79/120
	I0819 12:27:42.804338 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 80/120
	I0819 12:27:43.806750 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 81/120
	I0819 12:27:44.808069 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 82/120
	I0819 12:27:45.809441 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 83/120
	I0819 12:27:46.810644 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 84/120
	I0819 12:27:47.812688 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 85/120
	I0819 12:27:48.814205 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 86/120
	I0819 12:27:49.815606 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 87/120
	I0819 12:27:50.817059 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 88/120
	I0819 12:27:51.818601 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 89/120
	I0819 12:27:52.820895 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 90/120
	I0819 12:27:53.822712 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 91/120
	I0819 12:27:54.824435 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 92/120
	I0819 12:27:55.826619 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 93/120
	I0819 12:27:56.828307 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 94/120
	I0819 12:27:57.830462 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 95/120
	I0819 12:27:58.831924 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 96/120
	I0819 12:27:59.833301 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 97/120
	I0819 12:28:00.834815 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 98/120
	I0819 12:28:01.836390 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 99/120
	I0819 12:28:02.838571 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 100/120
	I0819 12:28:03.839785 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 101/120
	I0819 12:28:04.841285 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 102/120
	I0819 12:28:05.842627 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 103/120
	I0819 12:28:06.843948 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 104/120
	I0819 12:28:07.845769 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 105/120
	I0819 12:28:08.847282 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 106/120
	I0819 12:28:09.848698 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 107/120
	I0819 12:28:10.850944 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 108/120
	I0819 12:28:11.852433 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 109/120
	I0819 12:28:12.854500 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 110/120
	I0819 12:28:13.855939 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 111/120
	I0819 12:28:14.857759 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 112/120
	I0819 12:28:15.859163 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 113/120
	I0819 12:28:16.860608 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 114/120
	I0819 12:28:17.862648 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 115/120
	I0819 12:28:18.864020 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 116/120
	I0819 12:28:19.865562 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 117/120
	I0819 12:28:20.867387 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 118/120
	I0819 12:28:21.869656 2112590 main.go:141] libmachine: (ha-584983-m04) Waiting for machine to stop 119/120
	I0819 12:28:22.870104 2112590 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 12:28:22.870189 2112590 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 12:28:22.871661 2112590 out.go:201] 
	W0819 12:28:22.872691 2112590 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 12:28:22.872706 2112590 out.go:270] * 
	* 
	W0819 12:28:22.889627 2112590 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 12:28:22.890859 2112590 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-584983 stop -v=7 --alsologtostderr": exit status 82
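The stop failure above follows a fixed polling pattern: 120 one-second state checks, then GUEST_STOP_TIMEOUT because the VM never leaves the "Running" state. A hedged Go sketch of that wait-for-stop loop; requestStop and getState are hypothetical stand-ins, not the actual libmachine API:

// Sketch of the timeout structure visible in the log above: poll the VM
// state once per second for a fixed number of attempts, then give up.
// requestStop and getState are assumed placeholders, not real driver calls.
package main

import (
	"errors"
	"fmt"
	"time"
)

func requestStop() error { return nil }       // assumed: ask the VM to shut down
func getState() string   { return "Running" } // assumed: query the current VM state

func waitForStop(attempts int, interval time.Duration) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if getState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 120 attempts at 1s mirrors the roughly two-minute window seen in the failed stop.
	if err := waitForStop(120, time.Second); err != nil {
		fmt.Println("stop err:", err)
	}
}

The sketch only illustrates the timeout behaviour; as the stderr shows, the real stop path also backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup before asking the driver to stop the node.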
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr: exit status 3 (19.007909025s)

                                                
                                                
-- stdout --
	ha-584983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-584983-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:28:22.941030 2113050 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:28:22.941158 2113050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:28:22.941170 2113050 out.go:358] Setting ErrFile to fd 2...
	I0819 12:28:22.941175 2113050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:28:22.941342 2113050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:28:22.941544 2113050 out.go:352] Setting JSON to false
	I0819 12:28:22.941575 2113050 mustload.go:65] Loading cluster: ha-584983
	I0819 12:28:22.941700 2113050 notify.go:220] Checking for updates...
	I0819 12:28:22.941946 2113050 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:28:22.941961 2113050 status.go:255] checking status of ha-584983 ...
	I0819 12:28:22.943395 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:22.943445 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:22.967068 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0819 12:28:22.967686 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:22.968458 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:22.968485 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:22.968828 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:22.969040 2113050 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:28:22.970632 2113050 status.go:330] ha-584983 host status = "Running" (err=<nil>)
	I0819 12:28:22.970651 2113050 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:28:22.970930 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:22.970983 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:22.987941 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0819 12:28:22.988457 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:22.988984 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:22.989009 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:22.989306 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:22.989466 2113050 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:28:22.992251 2113050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:28:22.992706 2113050 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:28:22.992740 2113050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:28:22.992833 2113050 host.go:66] Checking if "ha-584983" exists ...
	I0819 12:28:22.993152 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:22.993205 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.008121 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0819 12:28:23.008688 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.009192 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.009218 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.009521 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.009715 2113050 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:28:23.009960 2113050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:28:23.010002 2113050 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:28:23.012515 2113050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:28:23.012899 2113050 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:28:23.012924 2113050 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:28:23.013055 2113050 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:28:23.013254 2113050 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:28:23.013402 2113050 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:28:23.013539 2113050 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:28:23.101101 2113050 ssh_runner.go:195] Run: systemctl --version
	I0819 12:28:23.108143 2113050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:28:23.128866 2113050 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:28:23.128909 2113050 api_server.go:166] Checking apiserver status ...
	I0819 12:28:23.128958 2113050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:28:23.151348 2113050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4908/cgroup
	W0819 12:28:23.164044 2113050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4908/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:28:23.164103 2113050 ssh_runner.go:195] Run: ls
	I0819 12:28:23.168978 2113050 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:28:23.173566 2113050 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:28:23.173594 2113050 status.go:422] ha-584983 apiserver status = Running (err=<nil>)
	I0819 12:28:23.173604 2113050 status.go:257] ha-584983 status: &{Name:ha-584983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:28:23.173626 2113050 status.go:255] checking status of ha-584983-m02 ...
	I0819 12:28:23.173958 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:23.174009 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.189793 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0819 12:28:23.190216 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.190655 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.190677 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.191041 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.191242 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .GetState
	I0819 12:28:23.192764 2113050 status.go:330] ha-584983-m02 host status = "Running" (err=<nil>)
	I0819 12:28:23.192783 2113050 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:28:23.193088 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:23.193111 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.209576 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I0819 12:28:23.210074 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.210592 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.210611 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.210939 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.211144 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .GetIP
	I0819 12:28:23.214091 2113050 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:28:23.214516 2113050 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:23:41 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:28:23.214555 2113050 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:28:23.214680 2113050 host.go:66] Checking if "ha-584983-m02" exists ...
	I0819 12:28:23.215125 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:23.215168 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.230502 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41193
	I0819 12:28:23.230870 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.231383 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.231403 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.231701 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.231885 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .DriverName
	I0819 12:28:23.232035 2113050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:28:23.232057 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHHostname
	I0819 12:28:23.234767 2113050 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:28:23.235208 2113050 main.go:141] libmachine: (ha-584983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:f2:2d", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:23:41 +0000 UTC Type:0 Mac:52:54:00:2d:f2:2d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-584983-m02 Clientid:01:52:54:00:2d:f2:2d}
	I0819 12:28:23.235237 2113050 main.go:141] libmachine: (ha-584983-m02) DBG | domain ha-584983-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:2d:f2:2d in network mk-ha-584983
	I0819 12:28:23.235384 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHPort
	I0819 12:28:23.235570 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHKeyPath
	I0819 12:28:23.235814 2113050 main.go:141] libmachine: (ha-584983-m02) Calling .GetSSHUsername
	I0819 12:28:23.235926 2113050 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m02/id_rsa Username:docker}
	I0819 12:28:23.325590 2113050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:28:23.343138 2113050 kubeconfig.go:125] found "ha-584983" server: "https://192.168.39.254:8443"
	I0819 12:28:23.343170 2113050 api_server.go:166] Checking apiserver status ...
	I0819 12:28:23.343205 2113050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:28:23.361130 2113050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup
	W0819 12:28:23.371521 2113050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:28:23.371588 2113050 ssh_runner.go:195] Run: ls
	I0819 12:28:23.376035 2113050 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 12:28:23.383345 2113050 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 12:28:23.383370 2113050 status.go:422] ha-584983-m02 apiserver status = Running (err=<nil>)
	I0819 12:28:23.383382 2113050 status.go:257] ha-584983-m02 status: &{Name:ha-584983-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:28:23.383406 2113050 status.go:255] checking status of ha-584983-m04 ...
	I0819 12:28:23.383716 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:23.383750 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.399389 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43967
	I0819 12:28:23.399803 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.400353 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.400377 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.400724 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.400939 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .GetState
	I0819 12:28:23.402533 2113050 status.go:330] ha-584983-m04 host status = "Running" (err=<nil>)
	I0819 12:28:23.402549 2113050 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:28:23.402835 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:23.402869 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.417720 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0819 12:28:23.418081 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.418530 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.418554 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.418896 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.419090 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .GetIP
	I0819 12:28:23.421762 2113050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:28:23.422170 2113050 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:25:50 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:28:23.422198 2113050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:28:23.422303 2113050 host.go:66] Checking if "ha-584983-m04" exists ...
	I0819 12:28:23.422698 2113050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:28:23.422744 2113050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:28:23.437939 2113050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0819 12:28:23.438423 2113050 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:28:23.438876 2113050 main.go:141] libmachine: Using API Version  1
	I0819 12:28:23.438895 2113050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:28:23.439174 2113050 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:28:23.439306 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .DriverName
	I0819 12:28:23.439465 2113050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:28:23.439492 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHHostname
	I0819 12:28:23.442171 2113050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:28:23.442592 2113050 main.go:141] libmachine: (ha-584983-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:6c:83", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:25:50 +0000 UTC Type:0 Mac:52:54:00:ee:6c:83 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-584983-m04 Clientid:01:52:54:00:ee:6c:83}
	I0819 12:28:23.442619 2113050 main.go:141] libmachine: (ha-584983-m04) DBG | domain ha-584983-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:ee:6c:83 in network mk-ha-584983
	I0819 12:28:23.442774 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHPort
	I0819 12:28:23.442964 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHKeyPath
	I0819 12:28:23.443109 2113050 main.go:141] libmachine: (ha-584983-m04) Calling .GetSSHUsername
	I0819 12:28:23.443248 2113050 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983-m04/id_rsa Username:docker}
	W0819 12:28:41.900394 2113050 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.105:22: connect: no route to host
	W0819 12:28:41.900543 2113050 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	E0819 12:28:41.900561 2113050 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	I0819 12:28:41.900568 2113050 status.go:257] ha-584983-m04 status: &{Name:ha-584983-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 12:28:41.900589 2113050 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-584983 -n ha-584983
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-584983 logs -n 25: (1.599948873s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m04 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp testdata/cp-test.txt                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983:/home/docker/cp-test_ha-584983-m04_ha-584983.txt                       |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983 sudo cat                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983.txt                                 |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m02:/home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m02 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m03:/home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n                                                                 | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | ha-584983-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-584983 ssh -n ha-584983-m03 sudo cat                                          | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | /home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-584983 node stop m02 -v=7                                                     | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-584983 node start m02 -v=7                                                    | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-584983 -v=7                                                           | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-584983 -v=7                                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-584983 --wait=true -v=7                                                    | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-584983                                                                | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:26 UTC |                     |
	| node    | ha-584983 node delete m03 -v=7                                                   | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:26 UTC | 19 Aug 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-584983 stop -v=7                                                              | ha-584983 | jenkins | v1.33.1 | 19 Aug 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:21:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:21:48.413736 2110774 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:21:48.414056 2110774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:48.414067 2110774 out.go:358] Setting ErrFile to fd 2...
	I0819 12:21:48.414072 2110774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:48.414272 2110774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:21:48.414923 2110774 out.go:352] Setting JSON to false
	I0819 12:21:48.416136 2110774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":162258,"bootTime":1723907850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:21:48.416228 2110774 start.go:139] virtualization: kvm guest
	I0819 12:21:48.418484 2110774 out.go:177] * [ha-584983] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:21:48.419987 2110774 notify.go:220] Checking for updates...
	I0819 12:21:48.419996 2110774 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:21:48.421030 2110774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:21:48.422057 2110774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:21:48.423341 2110774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:21:48.424484 2110774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:21:48.425609 2110774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:21:48.427108 2110774 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:21:48.427300 2110774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:21:48.427746 2110774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:21:48.427825 2110774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:21:48.443601 2110774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0819 12:21:48.444043 2110774 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:21:48.444697 2110774 main.go:141] libmachine: Using API Version  1
	I0819 12:21:48.444727 2110774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:21:48.445163 2110774 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:21:48.445376 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:21:48.481575 2110774 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:21:48.482670 2110774 start.go:297] selected driver: kvm2
	I0819 12:21:48.482691 2110774 start.go:901] validating driver "kvm2" against &{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:48.482882 2110774 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:21:48.483344 2110774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:48.483433 2110774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:21:48.498920 2110774 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:21:48.499653 2110774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:21:48.499737 2110774 cni.go:84] Creating CNI manager for ""
	I0819 12:21:48.499752 2110774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 12:21:48.499819 2110774 start.go:340] cluster config:
	{Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:48.500021 2110774 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:48.501556 2110774 out.go:177] * Starting "ha-584983" primary control-plane node in "ha-584983" cluster
	I0819 12:21:48.502552 2110774 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:21:48.502596 2110774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:21:48.502607 2110774 cache.go:56] Caching tarball of preloaded images
	I0819 12:21:48.502686 2110774 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:21:48.502703 2110774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:21:48.502832 2110774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/config.json ...
	I0819 12:21:48.503020 2110774 start.go:360] acquireMachinesLock for ha-584983: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:21:48.503059 2110774 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "ha-584983"
	I0819 12:21:48.503074 2110774 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:21:48.503082 2110774 fix.go:54] fixHost starting: 
	I0819 12:21:48.503350 2110774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:21:48.503382 2110774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:21:48.517573 2110774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0819 12:21:48.518003 2110774 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:21:48.518486 2110774 main.go:141] libmachine: Using API Version  1
	I0819 12:21:48.518512 2110774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:21:48.518859 2110774 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:21:48.519064 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:21:48.519258 2110774 main.go:141] libmachine: (ha-584983) Calling .GetState
	I0819 12:21:48.520776 2110774 fix.go:112] recreateIfNeeded on ha-584983: state=Running err=<nil>
	W0819 12:21:48.520811 2110774 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:21:48.522441 2110774 out.go:177] * Updating the running kvm2 "ha-584983" VM ...
	I0819 12:21:48.523421 2110774 machine.go:93] provisionDockerMachine start ...
	I0819 12:21:48.523438 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:21:48.523642 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.526208 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.526627 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.526660 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.526731 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:48.526899 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.527032 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.527169 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:48.527334 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:48.527586 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:48.527602 2110774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:21:48.646934 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983
	
	I0819 12:21:48.646966 2110774 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:21:48.647222 2110774 buildroot.go:166] provisioning hostname "ha-584983"
	I0819 12:21:48.647252 2110774 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:21:48.647433 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.650070 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.650582 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.650617 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.650842 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:48.651047 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.651227 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.651380 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:48.651589 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:48.651872 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:48.651889 2110774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-584983 && echo "ha-584983" | sudo tee /etc/hostname
	I0819 12:21:48.783591 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-584983
	
	I0819 12:21:48.783621 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.786618 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.787045 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.787065 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.787283 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:48.787494 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.787693 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:48.787870 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:48.788024 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:48.788263 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:48.788285 2110774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-584983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-584983/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-584983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:21:48.906743 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:21:48.906779 2110774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:21:48.906801 2110774 buildroot.go:174] setting up certificates
	I0819 12:21:48.906811 2110774 provision.go:84] configureAuth start
	I0819 12:21:48.906823 2110774 main.go:141] libmachine: (ha-584983) Calling .GetMachineName
	I0819 12:21:48.907121 2110774 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:21:48.909956 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.910351 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.910372 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.910573 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:48.913025 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.913416 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:48.913442 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:48.913589 2110774 provision.go:143] copyHostCerts
	I0819 12:21:48.913640 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:21:48.913672 2110774 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:21:48.913688 2110774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:21:48.913763 2110774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:21:48.913832 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:21:48.913850 2110774 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:21:48.913857 2110774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:21:48.913879 2110774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:21:48.913917 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:21:48.913933 2110774 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:21:48.913939 2110774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:21:48.913960 2110774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:21:48.914003 2110774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.ha-584983 san=[127.0.0.1 192.168.39.12 ha-584983 localhost minikube]
	I0819 12:21:49.255336 2110774 provision.go:177] copyRemoteCerts
	I0819 12:21:49.255400 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:21:49.255457 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:49.258102 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.258441 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:49.258469 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.258729 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:49.258891 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:49.259043 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:49.259172 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:21:49.347714 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:21:49.347805 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:21:49.375175 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:21:49.375246 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 12:21:49.402760 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:21:49.402839 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:21:49.429960 2110774 provision.go:87] duration metric: took 523.133813ms to configureAuth
	I0819 12:21:49.429993 2110774 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:21:49.430259 2110774 config.go:182] Loaded profile config "ha-584983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:21:49.430379 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:21:49.433378 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.433800 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:21:49.433826 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:21:49.434051 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:21:49.434278 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:49.434409 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:21:49.434590 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:21:49.434769 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:49.434949 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:21:49.434967 2110774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:23:20.279355 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:23:20.279390 2110774 machine.go:96] duration metric: took 1m31.755955533s to provisionDockerMachine
	I0819 12:23:20.279406 2110774 start.go:293] postStartSetup for "ha-584983" (driver="kvm2")
	I0819 12:23:20.279418 2110774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:23:20.279457 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.279813 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:23:20.279850 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.283332 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.283911 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.283938 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.284179 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.284384 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.284578 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.284710 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:23:20.373423 2110774 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:23:20.377788 2110774 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:23:20.377811 2110774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:23:20.377874 2110774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:23:20.377948 2110774 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:23:20.377959 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:23:20.378046 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:23:20.388898 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:23:20.414554 2110774 start.go:296] duration metric: took 135.134148ms for postStartSetup
	I0819 12:23:20.414600 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.414903 2110774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 12:23:20.414929 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.417809 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.418183 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.418207 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.418351 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.418550 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.418692 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.418848 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	W0819 12:23:20.502212 2110774 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 12:23:20.502239 2110774 fix.go:56] duration metric: took 1m31.999156534s for fixHost
	I0819 12:23:20.502268 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.505115 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.505531 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.505565 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.505731 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.505923 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.506083 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.506227 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.506410 2110774 main.go:141] libmachine: Using SSH client type: native
	I0819 12:23:20.506586 2110774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0819 12:23:20.506597 2110774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:23:20.616811 2110774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070200.585768225
	
	I0819 12:23:20.616837 2110774 fix.go:216] guest clock: 1724070200.585768225
	I0819 12:23:20.616845 2110774 fix.go:229] Guest: 2024-08-19 12:23:20.585768225 +0000 UTC Remote: 2024-08-19 12:23:20.50224815 +0000 UTC m=+92.127849537 (delta=83.520075ms)
	I0819 12:23:20.616883 2110774 fix.go:200] guest clock delta is within tolerance: 83.520075ms
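The guest-clock check above runs "date +%s.%N" on the node over SSH and compares the result against the host's wall clock; since the 83.5ms delta is inside minikube's tolerance, no time resync is attempted. A rough manual equivalent, using the SSH key and user shown in the sshutil lines above (illustrative only; the actual tolerance value is minikube's, not shown here):

    guest=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa \
      docker@192.168.39.12 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest/host clock delta: %.6fs\n", h - g }'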
	I0819 12:23:20.616890 2110774 start.go:83] releasing machines lock for "ha-584983", held for 1m32.113822129s
	I0819 12:23:20.616917 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.617172 2110774 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:23:20.619827 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.620186 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.620216 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.620359 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.620820 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.621003 2110774 main.go:141] libmachine: (ha-584983) Calling .DriverName
	I0819 12:23:20.621088 2110774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:23:20.621150 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.621209 2110774 ssh_runner.go:195] Run: cat /version.json
	I0819 12:23:20.621235 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHHostname
	I0819 12:23:20.623943 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.623970 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.624406 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.624432 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:20.624452 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.624475 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:20.624643 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.624841 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.624854 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHPort
	I0819 12:23:20.624995 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHKeyPath
	I0819 12:23:20.625004 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.625142 2110774 main.go:141] libmachine: (ha-584983) Calling .GetSSHUsername
	I0819 12:23:20.625172 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:23:20.625257 2110774 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/ha-584983/id_rsa Username:docker}
	I0819 12:23:20.730251 2110774 ssh_runner.go:195] Run: systemctl --version
	I0819 12:23:20.736480 2110774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:23:20.903595 2110774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:23:20.909845 2110774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:23:20.909928 2110774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:23:20.919906 2110774 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:23:20.919934 2110774 start.go:495] detecting cgroup driver to use...
	I0819 12:23:20.920008 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:23:20.936656 2110774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:23:20.950921 2110774 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:23:20.950986 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:23:20.965388 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:23:20.978783 2110774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:23:21.125000 2110774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:23:21.271111 2110774 docker.go:233] disabling docker service ...
	I0819 12:23:21.271186 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:23:21.288073 2110774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:23:21.301485 2110774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:23:21.445452 2110774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:23:21.588883 2110774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:23:21.602854 2110774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:23:21.620929 2110774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:23:21.621010 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.632214 2110774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:23:21.632275 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.642344 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.652583 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.663326 2110774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:23:21.673878 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.685299 2110774 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.695413 2110774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:23:21.704975 2110774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:23:21.713964 2110774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:23:21.723170 2110774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:23:21.864657 2110774 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:23:28.906500 2110774 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.041792229s)
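The sed commands above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf, before the roughly 7-second restart of crio. A quick way to confirm the drop-in on the node after the restart (not part of the test run itself):

    sudo systemctl is-active crio
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf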
	I0819 12:23:28.906547 2110774 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:23:28.906606 2110774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:23:28.912183 2110774 start.go:563] Will wait 60s for crictl version
	I0819 12:23:28.912253 2110774 ssh_runner.go:195] Run: which crictl
	I0819 12:23:28.916331 2110774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:23:28.954382 2110774 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:23:28.954468 2110774 ssh_runner.go:195] Run: crio --version
	I0819 12:23:28.983584 2110774 ssh_runner.go:195] Run: crio --version
	I0819 12:23:29.013592 2110774 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:23:29.014771 2110774 main.go:141] libmachine: (ha-584983) Calling .GetIP
	I0819 12:23:29.017448 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:29.017844 2110774 main.go:141] libmachine: (ha-584983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:50:ea", ip: ""} in network mk-ha-584983: {Iface:virbr1 ExpiryTime:2024-08-19 13:12:21 +0000 UTC Type:0 Mac:52:54:00:be:50:ea Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-584983 Clientid:01:52:54:00:be:50:ea}
	I0819 12:23:29.017874 2110774 main.go:141] libmachine: (ha-584983) DBG | domain ha-584983 has defined IP address 192.168.39.12 and MAC address 52:54:00:be:50:ea in network mk-ha-584983
	I0819 12:23:29.018084 2110774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:23:29.023014 2110774 kubeadm.go:883] updating cluster {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:23:29.023164 2110774 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:23:29.023214 2110774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:23:29.070028 2110774 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:23:29.070056 2110774 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:23:29.070120 2110774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:23:29.112599 2110774 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:23:29.112627 2110774 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:23:29.112640 2110774 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0819 12:23:29.112778 2110774 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-584983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
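The [Unit]/[Service] fragment above is the content of the kubelet drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below: ExecStart is cleared and re-pointed at the version-pinned binary with this node's hostname override and node IP. After the subsequent daemon-reload, the merged unit can be inspected on the node with standard systemd commands (illustrative):

    systemctl cat kubelet
    systemctl show kubelet --property=ExecStart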
	I0819 12:23:29.112862 2110774 ssh_runner.go:195] Run: crio config
	I0819 12:23:29.159105 2110774 cni.go:84] Creating CNI manager for ""
	I0819 12:23:29.159127 2110774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 12:23:29.159146 2110774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:23:29.159179 2110774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-584983 NodeName:ha-584983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:23:29.159367 2110774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-584983"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:23:29.159392 2110774 kube-vip.go:115] generating kube-vip config ...
	I0819 12:23:29.159450 2110774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 12:23:29.171176 2110774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 12:23:29.171297 2110774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
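The generated kube-vip static pod advertises the control-plane VIP 192.168.39.254 on port 8443 via ARP on eth0 and, with lb_enable set, also load-balances API-server traffic across control-plane members; the ip_vs modules loaded just before this config are what that load-balancing path relies on. Once the control plane is back up, the VIP can be spot-checked from a node (a hypothetical check that relies on the unauthenticated /healthz endpoint kubeadm clusters expose by default):

    curl -ks https://192.168.39.254:8443/healthz; echo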
	I0819 12:23:29.171367 2110774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:23:29.180930 2110774 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:23:29.181058 2110774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 12:23:29.190220 2110774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0819 12:23:29.206368 2110774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:23:29.222288 2110774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0819 12:23:29.238152 2110774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
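At this point the regenerated kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new and the kube-vip manifest at /etc/kubernetes/manifests/kube-vip.yaml. If the staged config needs a manual sanity check before kubeadm consumes it, the version-pinned tooling can validate it in place (a hypothetical step, assuming a kubeadm binary is present alongside the kubelet in the binaries directory listed earlier):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new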
	I0819 12:23:29.254848 2110774 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 12:23:29.258515 2110774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:23:29.405862 2110774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:23:29.421009 2110774 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983 for IP: 192.168.39.12
	I0819 12:23:29.421036 2110774 certs.go:194] generating shared ca certs ...
	I0819 12:23:29.421053 2110774 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:23:29.421220 2110774 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:23:29.421259 2110774 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:23:29.421268 2110774 certs.go:256] generating profile certs ...
	I0819 12:23:29.421336 2110774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/client.key
	I0819 12:23:29.421361 2110774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4
	I0819 12:23:29.421389 2110774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.84 192.168.39.206 192.168.39.254]
	I0819 12:23:29.677970 2110774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4 ...
	I0819 12:23:29.678005 2110774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4: {Name:mk7fd50c9344c6f7fcffebee0d5aa98b656971cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:23:29.678184 2110774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4 ...
	I0819 12:23:29.678195 2110774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4: {Name:mk7ab8578bd0d4f5cca191ddcdc37726b8725793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:23:29.678267 2110774 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt.e2964ab4 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt
	I0819 12:23:29.678408 2110774 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key.e2964ab4 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key
	I0819 12:23:29.678544 2110774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key
	I0819 12:23:29.678562 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:23:29.678575 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:23:29.678589 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:23:29.678602 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:23:29.678614 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:23:29.678629 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:23:29.678643 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:23:29.678655 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:23:29.678708 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:23:29.678741 2110774 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:23:29.678751 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:23:29.678773 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:23:29.678795 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:23:29.678819 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:23:29.678858 2110774 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:23:29.678895 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.678909 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:29.678922 2110774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:23:29.679579 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:23:29.705701 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:23:29.728689 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:23:29.751471 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:23:29.774201 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 12:23:29.797903 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:23:29.821550 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:23:29.845338 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/ha-584983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:23:29.868882 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:23:29.891762 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:23:29.914153 2110774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:23:29.937347 2110774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:23:29.953746 2110774 ssh_runner.go:195] Run: openssl version
	I0819 12:23:29.959436 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:23:29.969716 2110774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.973964 2110774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.974008 2110774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:23:29.979511 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:23:29.988441 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:23:29.998936 2110774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:30.003193 2110774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:30.003250 2110774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:23:30.008770 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:23:30.017716 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:23:30.027834 2110774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:23:30.031863 2110774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:23:30.031916 2110774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:23:30.037260 2110774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
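The test/hash/symlink sequence above follows OpenSSL's CA-directory convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and OpenSSL resolves trusted CAs in /etc/ssl/certs by the symlink name <hash>.0, which is why minikubeCA.pem ends up linked as b5213941.0 and 2094193.pem as 51391683.0 above. A manual equivalent for a single certificate (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"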
	I0819 12:23:30.045897 2110774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:23:30.050103 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:23:30.055388 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:23:30.060961 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:23:30.066380 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:23:30.072044 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:23:30.077442 2110774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
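Each "openssl x509 ... -checkend 86400" call above exits 0 only if the named certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what tells the restart path that control-plane certificates need regenerating. The same check with explicit output (illustrative):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires (or has expired) within 24h"
    fi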
	I0819 12:23:30.083074 2110774 kubeadm.go:392] StartCluster: {Name:ha-584983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-584983 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:23:30.083206 2110774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:23:30.083258 2110774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:23:30.120858 2110774 cri.go:89] found id: "a1594f4dc78b05f91ea7e36c9c3e0b220cde5d2087cd6e925752c2dc8eba3344"
	I0819 12:23:30.120883 2110774 cri.go:89] found id: "710ed8ebd42cd9f5091e2d029f61807ee84573aca51a8fec83b53682c60bec5d"
	I0819 12:23:30.120886 2110774 cri.go:89] found id: "567f42992a943a79c52c61dd56da3444360dd907a47faf5b97f7dd88dd801b5c"
	I0819 12:23:30.120889 2110774 cri.go:89] found id: "8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43"
	I0819 12:23:30.120891 2110774 cri.go:89] found id: "214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577"
	I0819 12:23:30.120894 2110774 cri.go:89] found id: "c6a144ce90fec4a3b9641d2b329d74eea7d24853fc8e12a26ce60dccdd8cd534"
	I0819 12:23:30.120897 2110774 cri.go:89] found id: "668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb"
	I0819 12:23:30.120899 2110774 cri.go:89] found id: "6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0"
	I0819 12:23:30.120901 2110774 cri.go:89] found id: "993c24f05aeae1e90f7f0d7497ad21ef61f5591c51c27f0b57cb784b5c205744"
	I0819 12:23:30.120908 2110774 cri.go:89] found id: "799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94"
	I0819 12:23:30.120910 2110774 cri.go:89] found id: "65afb8623fcea98f124cd8de3fb6f430c09728b1f5a6ec5fa0ce329353270857"
	I0819 12:23:30.120913 2110774 cri.go:89] found id: "7957c6abca008ceb73562acb3a07469d02c969d7d57c6b92056e92768484729d"
	I0819 12:23:30.120915 2110774 cri.go:89] found id: "e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7"
	I0819 12:23:30.120918 2110774 cri.go:89] found id: ""
	I0819 12:23:30.120962 2110774 ssh_runner.go:195] Run: sudo runc list -f json
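The crictl listing above returns bare container IDs because of the --quiet flag; any of them can be resolved back to its container name and state with crictl inspect (illustrative, using the first ID found above):

    sudo crictl inspect a1594f4dc78b05f91ea7e36c9c3e0b220cde5d2087cd6e925752c2dc8eba3344 \
      | grep -E '"name"|"state"' | head -n 4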
	
	
	==> CRI-O <==
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.507613200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070522507591060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4071eff0-7ee4-42b2-a56e-af635897d419 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.508217425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b045d46e-5998-4f66-8165-572eabf64ea8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.508291518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b045d46e-5998-4f66-8165-572eabf64ea8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.508808090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b045d46e-5998-4f66-8165-572eabf64ea8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.549114709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e93bb0b-321a-4366-826d-aac8216e226c name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.549179455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e93bb0b-321a-4366-826d-aac8216e226c name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.550570666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2e78281-896b-4d67-a7d1-837e37707d0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.550985537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070522550964750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2e78281-896b-4d67-a7d1-837e37707d0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.551413793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0387ce4-ba8e-4c0b-bb08-4185dcc0d75d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.551523917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0387ce4-ba8e-4c0b-bb08-4185dcc0d75d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.551909709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0387ce4-ba8e-4c0b-bb08-4185dcc0d75d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.594712520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=941f2741-24ac-4bbc-a4e7-656cc22033b2 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.594781670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=941f2741-24ac-4bbc-a4e7-656cc22033b2 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.596005064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eccb25b8-91db-42f7-af81-b3bf67dc2dbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.596490736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070522596418306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eccb25b8-91db-42f7-af81-b3bf67dc2dbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.597046910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=976402e0-42ae-4271-9104-7bd7aca6362c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.597106357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=976402e0-42ae-4271-9104-7bd7aca6362c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.597556408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=976402e0-42ae-4271-9104-7bd7aca6362c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.639555608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d6524cd-6e8e-4ae5-b0aa-dacbdad21c1b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.639624118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d6524cd-6e8e-4ae5-b0aa-dacbdad21c1b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.640840326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=feb486f9-3cd8-4dd1-9d7d-62b6570a825f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.641672054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070522641648578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feb486f9-3cd8-4dd1-9d7d-62b6570a825f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.642247780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d8d6d54-346b-4cb6-b64f-ea5e1a275ec9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.642300599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d8d6d54-346b-4cb6-b64f-ea5e1a275ec9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:42 ha-584983 crio[3578]: time="2024-08-19 12:28:42.642744683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:866e8aa44d27a0646aefc6952dbaeac49d6e89538e1aba9bf30ee9e2a9ef517d,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070299454022856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070258454335390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070254460608669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34311025d9fbfa21d71de6e44283647efd3710d39935b85205a4b0662e34017c,PodSandboxId:81a3fea1980df4a21789ea80c9bd94a8c6d635c4aa92b113d4766910b8189920,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070250447489261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b48e339-40f9-4517-b17d-1290ded5456e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aff679da8d7ddc2b9edbf72de5eb28a50f319f52523b10bbe669344accacc24,PodSandboxId:cf3b96668670c79772b4c68eb411f61ad7064d2b8f08509e28bc5afd780b3ed6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724070249788074300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807fbcd627afb16ada9d5d4593d996f49bf9ea0551deb5484c54f23b39eb6c12,PodSandboxId:b874caea6ef56c1711f28b1b8c84b514e779b29024474c964fc005c6a5066a10,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724070230064920055,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd1623c583ed76f894dba28dc6cc6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430,PodSandboxId:7c17fbf7fe5ee853199e0dc9908a0e50a2bf15fc22e4537edc5ed196745633a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070216537229055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b,PodSandboxId:7309ad62e1aef3fa6ff73e1c29dd0947d276a50ea34b94ce256fc848442ad687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724070216835525697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07587b99
2d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf,PodSandboxId:fcd51d500dbfa9ad10809dc792763a94de9a2008273183fc927f8464216cd9dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216624782748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38,PodSandboxId:01c21171dc10d55e726334b9243f19c8bbbb096bc1d27693263d5c6cfa506cc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070216418406358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0fee7a811b7f6f25b8f9ce1286bf0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3,PodSandboxId:cb3e15ff66a8b914044d28bbd9e5b99e563695912c49eb94bbff887927c4ab3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070216478513524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a,PodSandboxId:e877fe1138e168e5b43f2e837bab63a0a0b20271a58f8d4f664fa43440aa3506,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070216395929739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6,PodSandboxId:dd1eb0deb73b090cf11f8f07b10493fca575cfeea1f0ebe3a761056dd1449b1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070216310935583,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2516d28dfc36ba1496
bb124065913406,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc,PodSandboxId:5434a8cdacbb0f5d3b7103d56ab6b2d36450988b9cb155d5716c75e169cd6cc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070216198364459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c12861dc1b9904f8778e27e3374079a41095e697d2da350e0965a509ccf815c3,PodSandboxId:f1429b3feda56f49b4a7fa070b2f295b0a17a25557030505fcafc57bb3f57de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724069716321711862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s954n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a49fe8f1-f33a-4a9f-987c-29e97088cd51,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43,PodSandboxId:4d323e7bde36505ffc9589a1db9820522d02d1bad34b106596ec29fc697bd49f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586380327009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jgknw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0e1948-9d0c-49fd-aff6-d23c2d691e7d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577,PodSandboxId:9fff4f7ac43f036c3aa105b42f36af3f1ea66aae0dc92d40c2e6eab3ae806b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724069586332043521,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wll5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8383b1b6-e917-411e-834f-13db39309ec2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb,PodSandboxId:a9564aacd286257156fd6ddca016fe9111ede74c9e17c3316ae4c0a483af1265,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724069574416895537,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dg8d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ae558c-b6ea-4900-b534-ded9e58c1e96,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0,PodSandboxId:f34234b07799c9b3ed50143494abda0b1f49054e57aceba30b86a50a7f5ab5a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724069572336626960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7x44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6c6f31-aa4d-4868-91cd-3cbc67cd6de5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94,PodSandboxId:22491c1bbcc377abb5e8dc3061745f0141d27e010b85f13e7773553b2de1e6bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724069560632227223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb7457b4240720376dc418375c5384d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7,PodSandboxId:f3aa1be3148683154bf7bb7590e0f6f20e9c89699630cd7c9d8cd0d3fa908a73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724069560496499103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-584983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6c2f482ca2067643b235996155a98d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d8d6d54-346b-4cb6-b64f-ea5e1a275ec9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	866e8aa44d27a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   81a3fea1980df       storage-provisioner
	37b657589fb9d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   dd1eb0deb73b0       kube-apiserver-ha-584983
	c27a1380c1eae       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   01c21171dc10d       kube-controller-manager-ha-584983
	34311025d9fbf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   81a3fea1980df       storage-provisioner
	5aff679da8d7d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   cf3b96668670c       busybox-7dff88458-s954n
	807fbcd627afb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   b874caea6ef56       kube-vip-ha-584983
	ed810ef421941       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   7309ad62e1aef       kindnet-dg8d6
	a07587b992d4d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   fcd51d500dbfa       coredns-6f6b679f8f-jgknw
	dde389f70f091       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   7c17fbf7fe5ee       kube-proxy-w7x44
	11e61168d93d3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   cb3e15ff66a8b       etcd-ha-584983
	2edd88cca052e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   01c21171dc10d       kube-controller-manager-ha-584983
	5c1f3659ab0dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e877fe1138e16       coredns-6f6b679f8f-wll5k
	98df8dc7e54d9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   dd1eb0deb73b0       kube-apiserver-ha-584983
	a775313d14a7d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   5434a8cdacbb0       kube-scheduler-ha-584983
	c12861dc1b990       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   f1429b3feda56       busybox-7dff88458-s954n
	8f808fb6e4a29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   4d323e7bde365       coredns-6f6b679f8f-jgknw
	214bc022395ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   9fff4f7ac43f0       coredns-6f6b679f8f-wll5k
	668ac1e0d9a1b       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    15 minutes ago      Exited              kindnet-cni               0                   a9564aacd2862       kindnet-dg8d6
	6bcc112384a64       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      15 minutes ago      Exited              kube-proxy                0                   f34234b07799c       kube-proxy-w7x44
	799883f5e6773       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   22491c1bbcc37       kube-scheduler-ha-584983
	e2afc8c50fc96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   f3aa1be314868       etcd-ha-584983
	
	
	==> coredns [214bc022395ba156479eeeceea4c4de3e7da6f867a7f5c0ab0e7f259af46e577] <==
	[INFO] 10.244.2.2:37261 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.045080098s
	[INFO] 10.244.2.2:51973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220525s
	[INFO] 10.244.2.2:34558 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142147s
	[INFO] 10.244.2.2:33150 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000263427s
	[INFO] 10.244.0.4:58833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122286s
	[INFO] 10.244.0.4:37194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129784s
	[INFO] 10.244.1.2:34921 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829614s
	[INFO] 10.244.1.2:53910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115811s
	[INFO] 10.244.1.2:37254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008251s
	[INFO] 10.244.2.2:43347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012466s
	[INFO] 10.244.2.2:51494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210784s
	[INFO] 10.244.2.2:36691 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100028s
	[INFO] 10.244.0.4:39140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124263s
	[INFO] 10.244.0.4:51765 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099031s
	[INFO] 10.244.0.4:47207 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071543s
	[INFO] 10.244.0.4:37638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047976s
	[INFO] 10.244.1.2:36335 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080226s
	[INFO] 10.244.2.2:35526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107074s
	[INFO] 10.244.2.2:52104 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134936s
	[INFO] 10.244.0.4:37351 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075755s
	[INFO] 10.244.0.4:47268 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121675s
	[INFO] 10.244.1.2:55348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134871s
	[INFO] 10.244.1.2:40841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5c1f3659ab0dcde5fa06045786ae82f4ec6748a5d8657bb0cbcbbc9187ba1f2a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:53192->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:53192->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8f808fb6e4a2970ffa9b73634954db0011676118f4b2a2410dece998a7c98d43] <==
	[INFO] 10.244.2.2:34121 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214046s
	[INFO] 10.244.2.2:56792 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.042778287s
	[INFO] 10.244.0.4:59540 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001673972s
	[INFO] 10.244.0.4:42038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221674s
	[INFO] 10.244.0.4:46483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00106179s
	[INFO] 10.244.0.4:48125 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091001s
	[INFO] 10.244.0.4:37333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069922s
	[INFO] 10.244.0.4:50093 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066014s
	[INFO] 10.244.1.2:35478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106276s
	[INFO] 10.244.1.2:32788 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077319s
	[INFO] 10.244.1.2:48500 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062887s
	[INFO] 10.244.1.2:52624 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001380815s
	[INFO] 10.244.1.2:35526 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215556s
	[INFO] 10.244.2.2:42664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169406s
	[INFO] 10.244.1.2:47636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107912s
	[INFO] 10.244.1.2:57549 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279969s
	[INFO] 10.244.1.2:37605 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079457s
	[INFO] 10.244.2.2:59708 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107128s
	[INFO] 10.244.2.2:54205 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121358s
	[INFO] 10.244.0.4:57068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104078s
	[INFO] 10.244.0.4:48200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000268832s
	[INFO] 10.244.1.2:48471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138271s
	[INFO] 10.244.1.2:34249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107161s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a07587b992d4dd88c10dbf666987b6fd647ecdcddabd89c95f13daf5e6108fdf] <==
	Trace[773651555]: [10.511655974s] [10.511655974s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43938->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1571512260]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 12:23:48.480) (total time: 12606ms):
	Trace[1571512260]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43926->10.96.0.1:443: read: connection reset by peer 12606ms (12:24:01.086)
	Trace[1571512260]: [12.606478968s] [12.606478968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-584983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:28:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:12:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:24:20 +0000   Mon, 19 Aug 2024 12:13:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-584983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cd8edc3a2c24fa5a0144251d1079f81
	  System UUID:                9cd8edc3-a2c2-4fa5-a014-4251d1079f81
	  Boot ID:                    c299daa9-50a5-44ae-ad64-3cd1ed4c7fa4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s954n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-jgknw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-wll5k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-584983                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-dg8d6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-584983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-584983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-w7x44                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-584983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-584983                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m18s                 kube-proxy       
	  Normal   Starting                 15m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                   kubelet          Node ha-584983 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                   kubelet          Node ha-584983 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                   kubelet          Node ha-584983 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                   node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   NodeReady                15m                   kubelet          Node ha-584983 status is now: NodeReady
	  Normal   RegisteredNode           14m                   node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   RegisteredNode           13m                   node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Warning  ContainerGCFailed        5m55s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m17s (x3 over 6m6s)  kubelet          Node ha-584983 status is now: NodeNotReady
	  Normal   RegisteredNode           4m27s                 node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   RegisteredNode           4m20s                 node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	  Normal   RegisteredNode           3m15s                 node-controller  Node ha-584983 event: Registered Node ha-584983 in Controller
	
	
	Name:               ha-584983-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_13_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:13:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:28:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:25:00 +0000   Mon, 19 Aug 2024 12:24:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-584983-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c94b4594d79f4ec0a8dd96aab1847eb5
	  System UUID:                c94b4594-d79f-4ec0-a8dd-96aab1847eb5
	  Boot ID:                    538b89e0-9118-4821-8280-61972cf6e9f6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xjjwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-584983-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-pb989                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-584983-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-584983-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-cwlpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-584983-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-584983-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     15m                    cidrAllocator    Node ha-584983-m02 status is now: CIDRAssignmentFailed
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-584983-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-584983-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-584983-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-584983-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node ha-584983-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node ha-584983-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-584983-m02 event: Registered Node ha-584983-m02 in Controller
	
	
	Name:               ha-584983-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-584983-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=ha-584983
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_15_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:15:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-584983-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:26:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:26:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:26:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:26:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 12:25:55 +0000   Mon, 19 Aug 2024 12:26:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-584983-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfad94302a8543c8a125516d743187f8
	  System UUID:                cfad9430-2a85-43c8-a125-516d743187f8
	  Boot ID:                    a2843058-f472-4998-842f-faede31ec3e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lxx6f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-cjfsg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-rd7nb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-584983-m04 status is now: NodeHasSufficientMemory
	  Normal   CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-584983-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           12m                    node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-584983-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-584983-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-584983-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m27s                  node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-584983-m04 event: Registered Node ha-584983-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-584983-m04 has been rebooted, boot id: a2843058-f472-4998-842f-faede31ec3e3
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-584983-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-584983-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-584983-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m47s)   node-controller  Node ha-584983-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.203525] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.059214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056195] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.190463] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.123484] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.274294] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.959189] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.909497] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056139] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.254427] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.090121] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.241682] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 12:13] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 12:23] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +0.144911] systemd-fstab-generator[3509]: Ignoring "noauto" option for root device
	[  +0.169889] systemd-fstab-generator[3523]: Ignoring "noauto" option for root device
	[  +0.160744] systemd-fstab-generator[3535]: Ignoring "noauto" option for root device
	[  +0.271195] systemd-fstab-generator[3563]: Ignoring "noauto" option for root device
	[  +7.531724] systemd-fstab-generator[3664]: Ignoring "noauto" option for root device
	[  +0.092689] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.554400] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.010494] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.053630] kauditd_printk_skb: 2 callbacks suppressed
	[Aug19 12:24] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [11e61168d93d3e839804c2d5ae9154af1a72c022e0e39ff8db616ac0b7e70ce3] <==
	{"level":"info","ts":"2024-08-19T12:25:20.561025Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"cc5def239a0ccee8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T12:25:20.561087Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.771350Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.206:34836","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-19T12:26:08.787880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb switched to configuration voters=(12325950308097266363 17496758794926370733)"}
	{"level":"info","ts":"2024-08-19T12:26:08.790360Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","removed-remote-peer-id":"cc5def239a0ccee8","removed-remote-peer-urls":["https://192.168.39.206:2380"]}
	{"level":"warn","ts":"2024-08-19T12:26:08.790681Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"ab0e927fe14112bb","removed-member-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.790779Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-19T12:26:08.790920Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"ab0e927fe14112bb","removed-member-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.790978Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-08-19T12:26:08.790620Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.791547Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:26:08.791620Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.793049Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:26:08.796646Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:26:08.796797Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.797011Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8","error":"context canceled"}
	{"level":"warn","ts":"2024-08-19T12:26:08.797095Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"cc5def239a0ccee8","error":"failed to read cc5def239a0ccee8 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-19T12:26:08.797151Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.797321Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T12:26:08.797390Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:26:08.797549Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:26:08.797687Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ab0e927fe14112bb","removed-remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:26:08.797785Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"ab0e927fe14112bb","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.800950Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ab0e927fe14112bb","remote-peer-id-stream-handler":"ab0e927fe14112bb","remote-peer-id-from":"cc5def239a0ccee8"}
	{"level":"warn","ts":"2024-08-19T12:26:08.810022Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.206:54946","server-name":"","error":"read tcp 192.168.39.12:2380->192.168.39.206:54946: read: connection reset by peer"}
	
	
	==> etcd [e2afc8c50fc96b7ff7995b47f16da736614c4ef2e38b424848d74e874d392fe7] <==
	2024/08/19 12:21:49 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T12:21:49.645076Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1349832400158465101,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T12:21:49.655349Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:21:49.655394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:21:49.655530Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ab0e927fe14112bb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T12:21:49.655725Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655760Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655789Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655863Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655930Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655962Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655988Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f2d0f96c14472fad"}
	{"level":"info","ts":"2024-08-19T12:21:49.655995Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656002Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656018Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656088Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656179Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.656191Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cc5def239a0ccee8"}
	{"level":"info","ts":"2024-08-19T12:21:49.659023Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"warn","ts":"2024-08-19T12:21:49.659047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.015988049s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T12:21:49.659192Z","caller":"traceutil/trace.go:171","msg":"trace[817397716] range","detail":"{range_begin:; range_end:; }","duration":"9.016145206s","start":"2024-08-19T12:21:40.643038Z","end":"2024-08-19T12:21:49.659183Z","steps":["trace[817397716] 'agreement among raft nodes before linearized reading'  (duration: 9.015985715s)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:21:49.659139Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-08-19T12:21:49.659375Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-584983","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"]}
	{"level":"error","ts":"2024-08-19T12:21:49.659265Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 12:28:43 up 16 min,  0 users,  load average: 0.24, 0.40, 0.29
	Linux ha-584983 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [668ac1e0d9a1bf034449a9debe0fb01835dc3b0dd8e95bdd46727069341040fb] <==
	I0819 12:21:15.486375       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:25.486949       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:21:25.487036       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:25.487239       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:21:25.487271       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:21:25.487351       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:21:25.487371       1 main.go:299] handling current node
	I0819 12:21:25.487392       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:21:25.487407       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:21:35.493559       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:21:35.493690       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:35.493871       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:21:35.493899       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:21:35.493988       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:21:35.495557       1 main.go:299] handling current node
	I0819 12:21:35.495791       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:21:35.495916       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:21:45.493553       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:21:45.493670       1 main.go:299] handling current node
	I0819 12:21:45.493706       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:21:45.493769       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:21:45.493985       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0819 12:21:45.494016       1 main.go:322] Node ha-584983-m03 has CIDR [10.244.2.0/24] 
	I0819 12:21:45.494100       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:21:45.494156       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ed810ef421941efecdfa09118a5ac05c7340097cc3ece3e9cb5c8ff0bc78645b] <==
	I0819 12:27:58.002969       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:28:07.993973       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:28:07.994251       1 main.go:299] handling current node
	I0819 12:28:07.994311       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:28:07.994341       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:28:07.994651       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:28:07.994700       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:28:17.993996       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:28:17.994127       1 main.go:299] handling current node
	I0819 12:28:17.994165       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:28:17.994171       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:28:17.994358       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:28:17.994383       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:28:27.993929       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:28:27.994086       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	I0819 12:28:27.994256       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:28:27.994280       1 main.go:299] handling current node
	I0819 12:28:27.994302       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:28:27.994318       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:28:37.994805       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0819 12:28:37.994897       1 main.go:299] handling current node
	I0819 12:28:37.994930       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0819 12:28:37.994953       1 main.go:322] Node ha-584983-m02 has CIDR [10.244.1.0/24] 
	I0819 12:28:37.995134       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0819 12:28:37.995156       1 main.go:322] Node ha-584983-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [37b657589fb9d48765eb945154358e1a9fe0c2112fc0647102f121444c4e58be] <==
	I0819 12:24:20.045157       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 12:24:20.131416       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:24:20.131821       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:24:20.141133       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:24:20.141507       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:24:20.141576       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:24:20.141681       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:24:20.145310       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:24:20.146416       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:24:20.149118       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:24:20.149374       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:24:20.149412       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:24:20.149525       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:24:20.168008       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:24:20.187182       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:24:20.196959       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:24:20.196999       1 policy_source.go:224] refreshing policies
	I0819 12:24:20.248276       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0819 12:24:20.310280       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206 192.168.39.84]
	I0819 12:24:20.311770       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:24:20.317935       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 12:24:20.323658       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 12:24:21.039763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 12:24:21.438856       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.206 192.168.39.84]
	W0819 12:24:31.439093       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.84]
	
	
	==> kube-apiserver [98df8dc7e54d9e2cff0abca07f0590d77288a28af781674b29c81ecb374d0ec6] <==
	I0819 12:23:36.889512       1 options.go:228] external host was not specified, using 192.168.39.12
	I0819 12:23:36.916623       1 server.go:142] Version: v1.31.0
	I0819 12:23:36.916662       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:37.680341       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 12:23:37.703384       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:23:37.705633       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 12:23:37.707692       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 12:23:37.707903       1 instance.go:232] Using reconciler: lease
	W0819 12:23:57.674846       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 12:23:57.674845       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0819 12:23:57.708409       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2edd88cca052e9928e943781e7ce66a36df09c2c405816c5a70428dd93e8bc38] <==
	I0819 12:23:37.794928       1 serving.go:386] Generated self-signed cert in-memory
	I0819 12:23:38.038946       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 12:23:38.038985       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:38.040989       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 12:23:38.041679       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 12:23:38.041829       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:23:38.041935       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0819 12:23:58.714248       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.12:8443/healthz\": dial tcp 192.168.39.12:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c27a1380c1eae06afae268f383c814fcbe5f20738383d5ada071203cc535dcc9] <==
	I0819 12:26:57.022735       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:26:57.043584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:26:57.068232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.976822ms"
	I0819 12:26:57.068418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.656µs"
	I0819 12:26:58.762304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	I0819 12:27:02.180362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-584983-m04"
	E0819 12:27:03.609238       1 gc_controller.go:151] "Failed to get node" err="node \"ha-584983-m03\" not found" logger="pod-garbage-collector-controller" node="ha-584983-m03"
	E0819 12:27:03.609336       1 gc_controller.go:151] "Failed to get node" err="node \"ha-584983-m03\" not found" logger="pod-garbage-collector-controller" node="ha-584983-m03"
	E0819 12:27:03.609363       1 gc_controller.go:151] "Failed to get node" err="node \"ha-584983-m03\" not found" logger="pod-garbage-collector-controller" node="ha-584983-m03"
	E0819 12:27:03.609390       1 gc_controller.go:151] "Failed to get node" err="node \"ha-584983-m03\" not found" logger="pod-garbage-collector-controller" node="ha-584983-m03"
	E0819 12:27:03.609417       1 gc_controller.go:151] "Failed to get node" err="node \"ha-584983-m03\" not found" logger="pod-garbage-collector-controller" node="ha-584983-m03"
	I0819 12:27:03.621797       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-584983-m03"
	I0819 12:27:03.649730       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-584983-m03"
	I0819 12:27:03.649774       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-584983-m03"
	I0819 12:27:03.680653       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-584983-m03"
	I0819 12:27:03.680799       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-f4pns"
	I0819 12:27:03.713510       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-f4pns"
	I0819 12:27:03.713532       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2llhd"
	I0819 12:27:03.744682       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2llhd"
	I0819 12:27:03.744736       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-584983-m03"
	I0819 12:27:03.769764       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-584983-m03"
	I0819 12:27:03.769891       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-584983-m03"
	I0819 12:27:03.800418       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-584983-m03"
	I0819 12:27:03.800503       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-584983-m03"
	I0819 12:27:03.828620       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-584983-m03"
	
	
	==> kube-proxy [6bcc112384a642654af61d531471ec807ee0247f5266073b82709b10c8c6d7f0] <==
	E0819 12:20:42.561243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:45.630354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:45.630558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:45.631115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:45.631181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:45.631529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:45.631589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:51.774822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:51.774891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:51.774978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:51.775017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:20:51.775080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:20:51.775095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:00.992120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:00.992367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:04.063888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:04.064310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:04.064405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:04.064527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:25.566627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:25.566899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:28.637926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:28.637985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 12:21:31.710979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 12:21:31.711638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-584983&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [dde389f70f0915580e61352a0435612ace8bd67c298eabc63daa4683a32cd430] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:23:40.734026       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:23:43.806737       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:23:46.878843       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:23:53.021968       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 12:24:05.310179       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-584983\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 12:24:24.403293       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0819 12:24:24.403496       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:24:24.443854       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:24:24.443905       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:24:24.443982       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:24:24.446750       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:24:24.447130       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:24:24.447173       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:24:24.449061       1 config.go:197] "Starting service config controller"
	I0819 12:24:24.449162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:24:24.449219       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:24:24.449281       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:24:24.450054       1 config.go:326] "Starting node config controller"
	I0819 12:24:24.450105       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:24:24.549500       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:24:24.549564       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:24:24.550927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [799883f5e6773126a63f1596ba492891fe0d9e2e1d9f2179e64391398bcf7a94] <==
	E0819 12:12:45.070724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:12:45.161155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 12:12:45.161289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 12:12:45.489324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:15:14.882392       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hxh55" node="ha-584983-m03"
	E0819 12:15:14.883871       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hxh55\": pod busybox-7dff88458-hxh55 is already assigned to node \"ha-584983-m03\"" pod="default/busybox-7dff88458-hxh55"
	E0819 12:15:50.520891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	E0819 12:15:50.521012       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d49d2a08-95ae-4ebd-a52e-3d0575bfd9f5(kube-system/kube-proxy-fbckf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fbckf"
	E0819 12:15:50.521045       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fbckf\": pod kube-proxy-fbckf is already assigned to node \"ha-584983-m04\"" pod="kube-system/kube-proxy-fbckf"
	I0819 12:15:50.521064       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fbckf" node="ha-584983-m04"
	E0819 12:21:40.432533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:41.258178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 12:21:42.415879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:42.568634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 12:21:42.952549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:44.332727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 12:21:44.423973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 12:21:45.087916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 12:21:45.373710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 12:21:45.734352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 12:21:46.076058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 12:21:47.327396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 12:21:47.448497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 12:21:47.856078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 12:21:49.556967       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a775313d14a7d28cd862d189b963cabab90bcea02d9da5f1ae2038420f02b2dc] <==
	W0819 12:24:15.954181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:15.954252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.046882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.12:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.046950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.12:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.318614       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.12:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.318701       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.12:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.362195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.12:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.362278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.12:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.552891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.552967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.675379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.675541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:16.847648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.12:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:16.847711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.12:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:17.193947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:17.194015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:17.520707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.12:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0819 12:24:17.520752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.12:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.12:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:24:20.074404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 12:24:20.074601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 12:24:32.529621       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:26:05.498706       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lxx6f\": pod busybox-7dff88458-lxx6f is already assigned to node \"ha-584983-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lxx6f" node="ha-584983-m04"
	E0819 12:26:05.499659       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 446d34d8-df71-4b75-a398-726465cd2e30(default/busybox-7dff88458-lxx6f) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lxx6f"
	E0819 12:26:05.499925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lxx6f\": pod busybox-7dff88458-lxx6f is already assigned to node \"ha-584983-m04\"" pod="default/busybox-7dff88458-lxx6f"
	I0819 12:26:05.500161       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lxx6f" node="ha-584983-m04"
	
	
	==> kubelet <==
	Aug 19 12:27:08 ha-584983 kubelet[1315]: E0819 12:27:08.701631    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070428701068443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:08 ha-584983 kubelet[1315]: E0819 12:27:08.701733    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070428701068443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:18 ha-584983 kubelet[1315]: E0819 12:27:18.703534    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070438703232487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:18 ha-584983 kubelet[1315]: E0819 12:27:18.703572    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070438703232487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:28 ha-584983 kubelet[1315]: E0819 12:27:28.705500    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070448705163346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:28 ha-584983 kubelet[1315]: E0819 12:27:28.705759    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070448705163346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:38 ha-584983 kubelet[1315]: E0819 12:27:38.707354    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070458707010448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:38 ha-584983 kubelet[1315]: E0819 12:27:38.707415    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070458707010448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:48 ha-584983 kubelet[1315]: E0819 12:27:48.458309    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:27:48 ha-584983 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:27:48 ha-584983 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:27:48 ha-584983 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:27:48 ha-584983 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:27:48 ha-584983 kubelet[1315]: E0819 12:27:48.709097    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070468708859589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:48 ha-584983 kubelet[1315]: E0819 12:27:48.709139    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070468708859589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:58 ha-584983 kubelet[1315]: E0819 12:27:58.711505    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070478710399959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:27:58 ha-584983 kubelet[1315]: E0819 12:27:58.711788    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070478710399959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:08 ha-584983 kubelet[1315]: E0819 12:28:08.716225    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070488715517429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:08 ha-584983 kubelet[1315]: E0819 12:28:08.716794    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070488715517429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:18 ha-584983 kubelet[1315]: E0819 12:28:18.719570    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070498718708079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:18 ha-584983 kubelet[1315]: E0819 12:28:18.719619    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070498718708079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:28 ha-584983 kubelet[1315]: E0819 12:28:28.720875    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070508720397985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:28 ha-584983 kubelet[1315]: E0819 12:28:28.720917    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070508720397985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:38 ha-584983 kubelet[1315]: E0819 12:28:38.724602    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070518723096536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:28:38 ha-584983 kubelet[1315]: E0819 12:28:38.725173    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070518723096536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:28:42.219809 2113218 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19479-2086935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-584983 -n ha-584983
helpers_test.go:261: (dbg) Run:  kubectl --context ha-584983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.73s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (322.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-172739
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-172739
E0819 12:44:02.189171 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-172739: exit status 82 (2m1.914836112s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-172739-m03"  ...
	* Stopping node "multinode-172739-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-172739" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-172739 --wait=true -v=8 --alsologtostderr
E0819 12:46:26.668178 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:49:02.189620 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-172739 --wait=true -v=8 --alsologtostderr: (3m18.506695406s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-172739
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-172739 -n multinode-172739
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-172739 logs -n 25: (1.420296639s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3931286913/001/cp-test_multinode-172739-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739:/home/docker/cp-test_multinode-172739-m02_multinode-172739.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739 sudo cat                                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m02_multinode-172739.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03:/home/docker/cp-test_multinode-172739-m02_multinode-172739-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739-m03 sudo cat                                   | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m02_multinode-172739-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp testdata/cp-test.txt                                                | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3931286913/001/cp-test_multinode-172739-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739:/home/docker/cp-test_multinode-172739-m03_multinode-172739.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739 sudo cat                                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m03_multinode-172739.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02:/home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739-m02 sudo cat                                   | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-172739 node stop m03                                                          | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	| node    | multinode-172739 node start                                                             | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC |                     |
	| stop    | -p multinode-172739                                                                     | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC |                     |
	| start   | -p multinode-172739                                                                     | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:45 UTC | 19 Aug 24 12:49 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:45:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:45:50.001597 2122632 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:45:50.001741 2122632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:45:50.001750 2122632 out.go:358] Setting ErrFile to fd 2...
	I0819 12:45:50.001754 2122632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:45:50.001910 2122632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:45:50.002450 2122632 out.go:352] Setting JSON to false
	I0819 12:45:50.003427 2122632 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":163700,"bootTime":1723907850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:45:50.003488 2122632 start.go:139] virtualization: kvm guest
	I0819 12:45:50.005648 2122632 out.go:177] * [multinode-172739] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:45:50.006882 2122632 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:45:50.006952 2122632 notify.go:220] Checking for updates...
	I0819 12:45:50.009036 2122632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:45:50.010169 2122632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:45:50.011192 2122632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:45:50.012362 2122632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:45:50.013614 2122632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:45:50.015255 2122632 config.go:182] Loaded profile config "multinode-172739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:45:50.015371 2122632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:45:50.016047 2122632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:45:50.016133 2122632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:45:50.031486 2122632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0819 12:45:50.031953 2122632 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:45:50.032588 2122632 main.go:141] libmachine: Using API Version  1
	I0819 12:45:50.032621 2122632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:45:50.032985 2122632 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:45:50.033207 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:45:50.068813 2122632 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:45:50.070187 2122632 start.go:297] selected driver: kvm2
	I0819 12:45:50.070210 2122632 start.go:901] validating driver "kvm2" against &{Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:45:50.070378 2122632 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:45:50.070730 2122632 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:45:50.070802 2122632 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:45:50.086239 2122632 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:45:50.086923 2122632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:45:50.086962 2122632 cni.go:84] Creating CNI manager for ""
	I0819 12:45:50.086974 2122632 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:45:50.087060 2122632 start.go:340] cluster config:
	{Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:45:50.087221 2122632 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:45:50.088854 2122632 out.go:177] * Starting "multinode-172739" primary control-plane node in "multinode-172739" cluster
	I0819 12:45:50.090015 2122632 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:45:50.090053 2122632 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:45:50.090064 2122632 cache.go:56] Caching tarball of preloaded images
	I0819 12:45:50.090136 2122632 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:45:50.090147 2122632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:45:50.090267 2122632 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/config.json ...
	I0819 12:45:50.090529 2122632 start.go:360] acquireMachinesLock for multinode-172739: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:45:50.090573 2122632 start.go:364] duration metric: took 24.835µs to acquireMachinesLock for "multinode-172739"
	I0819 12:45:50.090593 2122632 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:45:50.090605 2122632 fix.go:54] fixHost starting: 
	I0819 12:45:50.090897 2122632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:45:50.090934 2122632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:45:50.105574 2122632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0819 12:45:50.106089 2122632 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:45:50.106632 2122632 main.go:141] libmachine: Using API Version  1
	I0819 12:45:50.106653 2122632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:45:50.106980 2122632 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:45:50.107188 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:45:50.107347 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetState
	I0819 12:45:50.109369 2122632 fix.go:112] recreateIfNeeded on multinode-172739: state=Running err=<nil>
	W0819 12:45:50.109392 2122632 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:45:50.111116 2122632 out.go:177] * Updating the running kvm2 "multinode-172739" VM ...
	I0819 12:45:50.112056 2122632 machine.go:93] provisionDockerMachine start ...
	I0819 12:45:50.112077 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:45:50.112335 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.114549 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.114891 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.114913 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.115034 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.115205 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.115386 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.115502 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.115658 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.115871 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.115884 2122632 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:45:50.225446 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-172739
	
	I0819 12:45:50.225478 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetMachineName
	I0819 12:45:50.225748 2122632 buildroot.go:166] provisioning hostname "multinode-172739"
	I0819 12:45:50.225782 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetMachineName
	I0819 12:45:50.225997 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.228754 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.229150 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.229196 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.229448 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.229642 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.229795 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.229951 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.230224 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.230431 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.230452 2122632 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-172739 && echo "multinode-172739" | sudo tee /etc/hostname
	I0819 12:45:50.349216 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-172739
	
	I0819 12:45:50.349247 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.352415 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.352863 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.352887 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.353050 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.353248 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.353430 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.353562 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.353742 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.353913 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.353929 2122632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-172739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-172739/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-172739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:45:50.461245 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:45:50.461279 2122632 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:45:50.461317 2122632 buildroot.go:174] setting up certificates
	I0819 12:45:50.461330 2122632 provision.go:84] configureAuth start
	I0819 12:45:50.461340 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetMachineName
	I0819 12:45:50.461628 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:45:50.464189 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.464578 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.464608 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.464750 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.467159 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.467501 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.467528 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.467673 2122632 provision.go:143] copyHostCerts
	I0819 12:45:50.467706 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:45:50.467740 2122632 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:45:50.467758 2122632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:45:50.467826 2122632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:45:50.467928 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:45:50.467948 2122632 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:45:50.467958 2122632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:45:50.467989 2122632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:45:50.468048 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:45:50.468065 2122632 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:45:50.468071 2122632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:45:50.468093 2122632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:45:50.468153 2122632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.multinode-172739 san=[127.0.0.1 192.168.39.191 localhost minikube multinode-172739]
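	The server certificate generated in this step carries the SAN list shown in the log line above (127.0.0.1, 192.168.39.191, localhost, minikube, multinode-172739). As a purely illustrative check, not something the test run performs, those SANs could be confirmed on the provisioned machine with a standard openssl call against the remote copy at /etc/docker/server.pem (that path comes from the auth options earlier in this log):

	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'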
	I0819 12:45:50.615163 2122632 provision.go:177] copyRemoteCerts
	I0819 12:45:50.615233 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:45:50.615258 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.617938 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.618298 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.618327 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.618516 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.618740 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.619007 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.619158 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:45:50.703502 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:45:50.703596 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:45:50.740499 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:45:50.740566 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 12:45:50.765460 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:45:50.765545 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:45:50.798434 2122632 provision.go:87] duration metric: took 337.091056ms to configureAuth
	I0819 12:45:50.798471 2122632 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:45:50.798706 2122632 config.go:182] Loaded profile config "multinode-172739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:45:50.798788 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.802228 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.802676 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.802706 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.802896 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.803166 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.803386 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.803551 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.803735 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.803941 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.803959 2122632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:47:21.593879 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:47:21.593913 2122632 machine.go:96] duration metric: took 1m31.481841616s to provisionDockerMachine
	I0819 12:47:21.593928 2122632 start.go:293] postStartSetup for "multinode-172739" (driver="kvm2")
	I0819 12:47:21.593940 2122632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:47:21.593958 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.594331 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:47:21.594366 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.597719 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.598327 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.598360 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.598539 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.598768 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.598944 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.599095 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:47:21.680513 2122632 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:47:21.684593 2122632 command_runner.go:130] > NAME=Buildroot
	I0819 12:47:21.684616 2122632 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 12:47:21.684623 2122632 command_runner.go:130] > ID=buildroot
	I0819 12:47:21.684630 2122632 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 12:47:21.684637 2122632 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 12:47:21.684800 2122632 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:47:21.684833 2122632 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:47:21.684939 2122632 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:47:21.685026 2122632 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:47:21.685037 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:47:21.685121 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:47:21.695757 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:47:21.719170 2122632 start.go:296] duration metric: took 125.226186ms for postStartSetup
	I0819 12:47:21.719218 2122632 fix.go:56] duration metric: took 1m31.628613971s for fixHost
	I0819 12:47:21.719245 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.722386 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.722881 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.722912 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.723043 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.723244 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.723416 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.723553 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.723735 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:47:21.723980 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:47:21.723995 2122632 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:47:21.825010 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724071641.805085157
	
	I0819 12:47:21.825039 2122632 fix.go:216] guest clock: 1724071641.805085157
	I0819 12:47:21.825047 2122632 fix.go:229] Guest: 2024-08-19 12:47:21.805085157 +0000 UTC Remote: 2024-08-19 12:47:21.719225922 +0000 UTC m=+91.754348207 (delta=85.859235ms)
	I0819 12:47:21.825091 2122632 fix.go:200] guest clock delta is within tolerance: 85.859235ms
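	For readability: the delta reported above is simply the guest clock read over SSH (1724071641.805085157, i.e. 12:47:21.805085157 UTC) minus the host-side timestamp of the same moment (12:47:21.719225922 UTC): 21.805085157 s - 21.719225922 s = 0.085859235 s = 85.859235 ms, the value logged, which falls within the allowed skew tolerance.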
	I0819 12:47:21.825099 2122632 start.go:83] releasing machines lock for "multinode-172739", held for 1m31.734514797s
	I0819 12:47:21.825122 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.825401 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:47:21.828219 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.828630 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.828673 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.828811 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.829408 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.829623 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.829695 2122632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:47:21.829757 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.829852 2122632 ssh_runner.go:195] Run: cat /version.json
	I0819 12:47:21.829865 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.832383 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.832521 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.832758 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.832785 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.832954 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.832957 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.832995 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.833116 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.833142 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.833249 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.833307 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.833409 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:47:21.833473 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.833621 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:47:21.908484 2122632 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 12:47:21.908671 2122632 ssh_runner.go:195] Run: systemctl --version
	I0819 12:47:21.931014 2122632 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 12:47:21.931060 2122632 command_runner.go:130] > systemd 252 (252)
	I0819 12:47:21.931079 2122632 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 12:47:21.931145 2122632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:47:22.088151 2122632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:47:22.096456 2122632 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 12:47:22.096878 2122632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:47:22.096982 2122632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:47:22.106949 2122632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:47:22.106980 2122632 start.go:495] detecting cgroup driver to use...
	I0819 12:47:22.107053 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:47:22.124135 2122632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:47:22.139253 2122632 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:47:22.139320 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:47:22.153294 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:47:22.166354 2122632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:47:22.308668 2122632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:47:22.449169 2122632 docker.go:233] disabling docker service ...
	I0819 12:47:22.449253 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:47:22.467248 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:47:22.481074 2122632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:47:22.616590 2122632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:47:22.752967 2122632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:47:22.766734 2122632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:47:22.784816 2122632 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
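	The command above writes a one-line crictl configuration, echoed back in the log line just shown. With this file in place, the crictl invocations later in this log (for example the sudo /usr/bin/crictl version call) reach the CRI-O socket without needing an explicit --runtime-endpoint flag. The resulting file is simply:

	    # /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock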
	I0819 12:47:22.785104 2122632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:47:22.785161 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.795546 2122632 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:47:22.795596 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.805810 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.815787 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.826472 2122632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:47:22.837613 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.848715 2122632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.860363 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
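	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along the following lines. The section headers are the standard CRI-O ones and are assumed here for illustration; the log itself only shows the individual substitutions:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]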
	I0819 12:47:22.871612 2122632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:47:22.881670 2122632 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 12:47:22.881756 2122632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:47:22.891441 2122632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:47:23.026272 2122632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:47:23.264119 2122632 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:47:23.264207 2122632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:47:23.269094 2122632 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 12:47:23.269122 2122632 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 12:47:23.269132 2122632 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0819 12:47:23.269142 2122632 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:47:23.269150 2122632 command_runner.go:130] > Access: 2024-08-19 12:47:23.140406004 +0000
	I0819 12:47:23.269171 2122632 command_runner.go:130] > Modify: 2024-08-19 12:47:23.140406004 +0000
	I0819 12:47:23.269184 2122632 command_runner.go:130] > Change: 2024-08-19 12:47:23.140406004 +0000
	I0819 12:47:23.269189 2122632 command_runner.go:130] >  Birth: -
	I0819 12:47:23.269212 2122632 start.go:563] Will wait 60s for crictl version
	I0819 12:47:23.269257 2122632 ssh_runner.go:195] Run: which crictl
	I0819 12:47:23.273080 2122632 command_runner.go:130] > /usr/bin/crictl
	I0819 12:47:23.273150 2122632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:47:23.313101 2122632 command_runner.go:130] > Version:  0.1.0
	I0819 12:47:23.313126 2122632 command_runner.go:130] > RuntimeName:  cri-o
	I0819 12:47:23.313132 2122632 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 12:47:23.313138 2122632 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 12:47:23.314304 2122632 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:47:23.314393 2122632 ssh_runner.go:195] Run: crio --version
	I0819 12:47:23.342426 2122632 command_runner.go:130] > crio version 1.29.1
	I0819 12:47:23.342458 2122632 command_runner.go:130] > Version:        1.29.1
	I0819 12:47:23.342467 2122632 command_runner.go:130] > GitCommit:      unknown
	I0819 12:47:23.342474 2122632 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:47:23.342481 2122632 command_runner.go:130] > GitTreeState:   clean
	I0819 12:47:23.342488 2122632 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:47:23.342492 2122632 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:47:23.342496 2122632 command_runner.go:130] > Compiler:       gc
	I0819 12:47:23.342508 2122632 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:47:23.342512 2122632 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:47:23.342519 2122632 command_runner.go:130] > BuildTags:      
	I0819 12:47:23.342524 2122632 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:47:23.342531 2122632 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:47:23.342537 2122632 command_runner.go:130] >   btrfs_noversion
	I0819 12:47:23.342544 2122632 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:47:23.342552 2122632 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:47:23.342558 2122632 command_runner.go:130] >   seccomp
	I0819 12:47:23.342576 2122632 command_runner.go:130] > LDFlags:          unknown
	I0819 12:47:23.342586 2122632 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:47:23.342591 2122632 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:47:23.342673 2122632 ssh_runner.go:195] Run: crio --version
	I0819 12:47:23.368381 2122632 command_runner.go:130] > crio version 1.29.1
	I0819 12:47:23.368401 2122632 command_runner.go:130] > Version:        1.29.1
	I0819 12:47:23.368410 2122632 command_runner.go:130] > GitCommit:      unknown
	I0819 12:47:23.368414 2122632 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:47:23.368418 2122632 command_runner.go:130] > GitTreeState:   clean
	I0819 12:47:23.368425 2122632 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:47:23.368429 2122632 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:47:23.368433 2122632 command_runner.go:130] > Compiler:       gc
	I0819 12:47:23.368438 2122632 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:47:23.368442 2122632 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:47:23.368446 2122632 command_runner.go:130] > BuildTags:      
	I0819 12:47:23.368450 2122632 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:47:23.368454 2122632 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:47:23.368459 2122632 command_runner.go:130] >   btrfs_noversion
	I0819 12:47:23.368465 2122632 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:47:23.368472 2122632 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:47:23.368478 2122632 command_runner.go:130] >   seccomp
	I0819 12:47:23.368484 2122632 command_runner.go:130] > LDFlags:          unknown
	I0819 12:47:23.368490 2122632 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:47:23.368499 2122632 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:47:23.371945 2122632 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:47:23.372866 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:47:23.375359 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:23.375725 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:23.375752 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:23.375946 2122632 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:47:23.380147 2122632 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 12:47:23.380267 2122632 kubeadm.go:883] updating cluster {Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:47:23.380443 2122632 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:47:23.380491 2122632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:47:23.425487 2122632 command_runner.go:130] > {
	I0819 12:47:23.425519 2122632 command_runner.go:130] >   "images": [
	I0819 12:47:23.425527 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425540 2122632 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:47:23.425548 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425559 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:47:23.425565 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425573 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425608 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:47:23.425624 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:47:23.425631 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425638 2122632 command_runner.go:130] >       "size": "87165492",
	I0819 12:47:23.425646 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425652 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425663 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425670 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425681 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425687 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425699 2122632 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:47:23.425708 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425716 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:47:23.425724 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425728 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425736 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:47:23.425745 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:47:23.425749 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425753 2122632 command_runner.go:130] >       "size": "87190579",
	I0819 12:47:23.425758 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425766 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425773 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425778 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425781 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425787 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425797 2122632 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:47:23.425802 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425807 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:47:23.425811 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425818 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425825 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:47:23.425832 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:47:23.425838 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425843 2122632 command_runner.go:130] >       "size": "1363676",
	I0819 12:47:23.425848 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425854 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425859 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425866 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425870 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425877 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425883 2122632 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:47:23.425890 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425895 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:47:23.425902 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425906 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425914 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:47:23.425927 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:47:23.425934 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425939 2122632 command_runner.go:130] >       "size": "31470524",
	I0819 12:47:23.425943 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425947 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425951 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425956 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425960 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425963 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425969 2122632 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:47:23.425975 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425980 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:47:23.425986 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425991 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425997 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:47:23.426007 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:47:23.426013 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426017 2122632 command_runner.go:130] >       "size": "61245718",
	I0819 12:47:23.426024 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.426028 2122632 command_runner.go:130] >       "username": "nonroot",
	I0819 12:47:23.426035 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426039 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426043 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426046 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426053 2122632 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:47:23.426059 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426066 2122632 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:47:23.426073 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426077 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426087 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:47:23.426094 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:47:23.426100 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426104 2122632 command_runner.go:130] >       "size": "149009664",
	I0819 12:47:23.426111 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426115 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426121 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426126 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426129 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426133 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426137 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426141 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426147 2122632 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:47:23.426153 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426158 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:47:23.426165 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426169 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426181 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:47:23.426191 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:47:23.426194 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426199 2122632 command_runner.go:130] >       "size": "95233506",
	I0819 12:47:23.426203 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426212 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426217 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426221 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426224 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426228 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426232 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426236 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426242 2122632 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:47:23.426249 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426254 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:47:23.426259 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426265 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426284 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:47:23.426294 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:47:23.426301 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426305 2122632 command_runner.go:130] >       "size": "89437512",
	I0819 12:47:23.426309 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426314 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426325 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426330 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426334 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426338 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426342 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426345 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426351 2122632 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:47:23.426355 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426361 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:47:23.426364 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426368 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426375 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:47:23.426385 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:47:23.426390 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426396 2122632 command_runner.go:130] >       "size": "92728217",
	I0819 12:47:23.426401 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.426406 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426410 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426414 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426418 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426421 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426427 2122632 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:47:23.426433 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426439 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:47:23.426446 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426450 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426463 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:47:23.426472 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:47:23.426479 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426484 2122632 command_runner.go:130] >       "size": "68420936",
	I0819 12:47:23.426491 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426496 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426499 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426505 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426512 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426517 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426520 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426524 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426530 2122632 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:47:23.426537 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426542 2122632 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:47:23.426546 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426552 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426567 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:47:23.426582 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:47:23.426593 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426601 2122632 command_runner.go:130] >       "size": "742080",
	I0819 12:47:23.426612 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426625 2122632 command_runner.go:130] >         "value": "65535"
	I0819 12:47:23.426635 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426642 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426649 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426654 2122632 command_runner.go:130] >       "pinned": true
	I0819 12:47:23.426660 2122632 command_runner.go:130] >     }
	I0819 12:47:23.426663 2122632 command_runner.go:130] >   ]
	I0819 12:47:23.426667 2122632 command_runner.go:130] > }
	I0819 12:47:23.426899 2122632 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:47:23.426917 2122632 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:47:23.426984 2122632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:47:23.462359 2122632 command_runner.go:130] > {
	I0819 12:47:23.462394 2122632 command_runner.go:130] >   "images": [
	I0819 12:47:23.462399 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462407 2122632 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:47:23.462413 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462418 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:47:23.462422 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462426 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462435 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:47:23.462441 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:47:23.462445 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462449 2122632 command_runner.go:130] >       "size": "87165492",
	I0819 12:47:23.462453 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462457 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462472 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462486 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462490 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462493 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462499 2122632 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:47:23.462503 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462512 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:47:23.462518 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462522 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462532 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:47:23.462539 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:47:23.462545 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462549 2122632 command_runner.go:130] >       "size": "87190579",
	I0819 12:47:23.462553 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462564 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462574 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462581 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462589 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462595 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462605 2122632 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:47:23.462614 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462625 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:47:23.462633 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462639 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462653 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:47:23.462668 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:47:23.462677 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462684 2122632 command_runner.go:130] >       "size": "1363676",
	I0819 12:47:23.462692 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462697 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462715 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462721 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462725 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462731 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462737 2122632 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:47:23.462741 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462748 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:47:23.462754 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462758 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462769 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:47:23.462789 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:47:23.462798 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462805 2122632 command_runner.go:130] >       "size": "31470524",
	I0819 12:47:23.462814 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462824 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462830 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462838 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462842 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462846 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462852 2122632 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:47:23.462859 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462864 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:47:23.462869 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462873 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462882 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:47:23.462892 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:47:23.462897 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462902 2122632 command_runner.go:130] >       "size": "61245718",
	I0819 12:47:23.462906 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462911 2122632 command_runner.go:130] >       "username": "nonroot",
	I0819 12:47:23.462915 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462920 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462923 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462927 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462933 2122632 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:47:23.462937 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462942 2122632 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:47:23.462946 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462950 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462957 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:47:23.462966 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:47:23.462969 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462974 2122632 command_runner.go:130] >       "size": "149009664",
	I0819 12:47:23.462980 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.462984 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.462991 2122632 command_runner.go:130] >       },
	I0819 12:47:23.462997 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463001 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463007 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463011 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463110 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463116 2122632 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:47:23.463123 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463127 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:47:23.463131 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463135 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463142 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:47:23.463152 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:47:23.463155 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463163 2122632 command_runner.go:130] >       "size": "95233506",
	I0819 12:47:23.463168 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463174 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.463183 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463190 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463198 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463204 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463211 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463216 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463229 2122632 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:47:23.463238 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463246 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:47:23.463254 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463260 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463285 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:47:23.463301 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:47:23.463308 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463315 2122632 command_runner.go:130] >       "size": "89437512",
	I0819 12:47:23.463323 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463329 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.463333 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463338 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463346 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463352 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463360 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463366 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463376 2122632 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:47:23.463386 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463397 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:47:23.463404 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463408 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463415 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:47:23.463429 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:47:23.463435 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463440 2122632 command_runner.go:130] >       "size": "92728217",
	I0819 12:47:23.463446 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.463452 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463461 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463468 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463483 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463491 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463499 2122632 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:47:23.463505 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463510 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:47:23.463516 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463520 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463529 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:47:23.463540 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:47:23.463548 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463555 2122632 command_runner.go:130] >       "size": "68420936",
	I0819 12:47:23.463564 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463571 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.463579 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463585 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463594 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463602 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463611 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463616 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463628 2122632 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:47:23.463637 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463645 2122632 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:47:23.463653 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463659 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463673 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:47:23.463687 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:47:23.463695 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463702 2122632 command_runner.go:130] >       "size": "742080",
	I0819 12:47:23.463710 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463717 2122632 command_runner.go:130] >         "value": "65535"
	I0819 12:47:23.463725 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463731 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463743 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463753 2122632 command_runner.go:130] >       "pinned": true
	I0819 12:47:23.463758 2122632 command_runner.go:130] >     }
	I0819 12:47:23.463767 2122632 command_runner.go:130] >   ]
	I0819 12:47:23.463772 2122632 command_runner.go:130] > }
	I0819 12:47:23.463926 2122632 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:47:23.463944 2122632 cache_images.go:84] Images are preloaded, skipping loading
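	The JSON block above is the image inventory reported by the container runtime; crio.go:514 and cache_images.go:84 conclude from it that every image needed for v1.31.0 is already present, so no loading is performed. A minimal, hypothetical Go sketch of that kind of check (assuming only the crictl-style JSON shape visible above, not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the shape of the `crictl images -o json` output shown above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		}

		// Query the runtime for its image inventory (illustrative reproduction step).
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}

		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		present := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				present[tag] = true
			}
		}

		for _, want := range required {
			if !present[want] {
				fmt.Printf("image not preloaded: %s\n", want)
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}

	Running a check like this against the inventory above would report every required v1.31.0 image as present, matching the "skipping loading" decision in the log.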
	I0819 12:47:23.463959 2122632 kubeadm.go:934] updating node { 192.168.39.191 8443 v1.31.0 crio true true} ...
	I0819 12:47:23.464085 2122632 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-172739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
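	The kubelet [Unit]/[Service] snippet and the config struct above show the values minikube substitutes into the kubelet systemd override for this node (v1.31.0, multinode-172739, 192.168.39.191). A hypothetical Go text/template sketch that renders the same override from those three values (illustrative only, not minikube's actual kubeadm.go code):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit reproduces the override printed by kubeadm.go:946 above; the
	// placeholder field names are illustrative, not minikube's real types.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		// Values taken from the node entry logged above.
		node := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.31.0", "multinode-172739", "192.168.39.191"}

		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, node); err != nil {
			panic(err)
		}
	}

	Executing the template with the logged values reproduces the ExecStart line shown above before minikube moves on to `crio config`.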
	I0819 12:47:23.464156 2122632 ssh_runner.go:195] Run: crio config
	I0819 12:47:23.496728 2122632 command_runner.go:130] ! time="2024-08-19 12:47:23.476909496Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 12:47:23.502906 2122632 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 12:47:23.507650 2122632 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 12:47:23.507676 2122632 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 12:47:23.507683 2122632 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 12:47:23.507687 2122632 command_runner.go:130] > #
	I0819 12:47:23.507706 2122632 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 12:47:23.507717 2122632 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 12:47:23.507726 2122632 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 12:47:23.507740 2122632 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 12:47:23.507745 2122632 command_runner.go:130] > # reload'.
	I0819 12:47:23.507754 2122632 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 12:47:23.507767 2122632 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 12:47:23.507780 2122632 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 12:47:23.507790 2122632 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 12:47:23.507796 2122632 command_runner.go:130] > [crio]
	I0819 12:47:23.507808 2122632 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 12:47:23.507816 2122632 command_runner.go:130] > # containers images, in this directory.
	I0819 12:47:23.507826 2122632 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 12:47:23.507839 2122632 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 12:47:23.507847 2122632 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 12:47:23.507855 2122632 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 12:47:23.507862 2122632 command_runner.go:130] > # imagestore = ""
	I0819 12:47:23.507869 2122632 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 12:47:23.507877 2122632 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 12:47:23.507883 2122632 command_runner.go:130] > storage_driver = "overlay"
	I0819 12:47:23.507891 2122632 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 12:47:23.507897 2122632 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 12:47:23.507907 2122632 command_runner.go:130] > storage_option = [
	I0819 12:47:23.507913 2122632 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 12:47:23.507917 2122632 command_runner.go:130] > ]
	I0819 12:47:23.507923 2122632 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 12:47:23.507931 2122632 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 12:47:23.507936 2122632 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 12:47:23.507941 2122632 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 12:47:23.507949 2122632 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 12:47:23.507954 2122632 command_runner.go:130] > # always happen on a node reboot
	I0819 12:47:23.507960 2122632 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 12:47:23.507989 2122632 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 12:47:23.508002 2122632 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 12:47:23.508007 2122632 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 12:47:23.508014 2122632 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 12:47:23.508021 2122632 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 12:47:23.508030 2122632 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 12:47:23.508035 2122632 command_runner.go:130] > # internal_wipe = true
	I0819 12:47:23.508043 2122632 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 12:47:23.508050 2122632 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 12:47:23.508054 2122632 command_runner.go:130] > # internal_repair = false
	I0819 12:47:23.508062 2122632 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 12:47:23.508068 2122632 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 12:47:23.508075 2122632 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 12:47:23.508080 2122632 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 12:47:23.508088 2122632 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 12:47:23.508092 2122632 command_runner.go:130] > [crio.api]
	I0819 12:47:23.508097 2122632 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 12:47:23.508104 2122632 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 12:47:23.508109 2122632 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 12:47:23.508114 2122632 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 12:47:23.508120 2122632 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 12:47:23.508127 2122632 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 12:47:23.508131 2122632 command_runner.go:130] > # stream_port = "0"
	I0819 12:47:23.508140 2122632 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 12:47:23.508144 2122632 command_runner.go:130] > # stream_enable_tls = false
	I0819 12:47:23.508152 2122632 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 12:47:23.508156 2122632 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 12:47:23.508182 2122632 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 12:47:23.508195 2122632 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 12:47:23.508200 2122632 command_runner.go:130] > # minutes.
	I0819 12:47:23.508207 2122632 command_runner.go:130] > # stream_tls_cert = ""
	I0819 12:47:23.508213 2122632 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 12:47:23.508223 2122632 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 12:47:23.508229 2122632 command_runner.go:130] > # stream_tls_key = ""
	I0819 12:47:23.508235 2122632 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 12:47:23.508243 2122632 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 12:47:23.508257 2122632 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 12:47:23.508263 2122632 command_runner.go:130] > # stream_tls_ca = ""
	I0819 12:47:23.508270 2122632 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:47:23.508276 2122632 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 12:47:23.508283 2122632 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:47:23.508289 2122632 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 12:47:23.508295 2122632 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 12:47:23.508302 2122632 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 12:47:23.508306 2122632 command_runner.go:130] > [crio.runtime]
	I0819 12:47:23.508312 2122632 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 12:47:23.508317 2122632 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 12:47:23.508321 2122632 command_runner.go:130] > # "nofile=1024:2048"
	I0819 12:47:23.508327 2122632 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 12:47:23.508333 2122632 command_runner.go:130] > # default_ulimits = [
	I0819 12:47:23.508337 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508345 2122632 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 12:47:23.508351 2122632 command_runner.go:130] > # no_pivot = false
	I0819 12:47:23.508357 2122632 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 12:47:23.508363 2122632 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 12:47:23.508368 2122632 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 12:47:23.508373 2122632 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 12:47:23.508380 2122632 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 12:47:23.508386 2122632 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:47:23.508393 2122632 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 12:47:23.508398 2122632 command_runner.go:130] > # Cgroup setting for conmon
	I0819 12:47:23.508406 2122632 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 12:47:23.508411 2122632 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 12:47:23.508418 2122632 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 12:47:23.508424 2122632 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 12:47:23.508434 2122632 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:47:23.508439 2122632 command_runner.go:130] > conmon_env = [
	I0819 12:47:23.508445 2122632 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:47:23.508448 2122632 command_runner.go:130] > ]
	I0819 12:47:23.508453 2122632 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 12:47:23.508460 2122632 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 12:47:23.508466 2122632 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 12:47:23.508479 2122632 command_runner.go:130] > # default_env = [
	I0819 12:47:23.508484 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508490 2122632 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 12:47:23.508497 2122632 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 12:47:23.508503 2122632 command_runner.go:130] > # selinux = false
	I0819 12:47:23.508510 2122632 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 12:47:23.508518 2122632 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 12:47:23.508524 2122632 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 12:47:23.508528 2122632 command_runner.go:130] > # seccomp_profile = ""
	I0819 12:47:23.508533 2122632 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 12:47:23.508539 2122632 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 12:47:23.508547 2122632 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 12:47:23.508551 2122632 command_runner.go:130] > # which might increase security.
	I0819 12:47:23.508555 2122632 command_runner.go:130] > # This option is currently deprecated,
	I0819 12:47:23.508566 2122632 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 12:47:23.508576 2122632 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 12:47:23.508586 2122632 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 12:47:23.508598 2122632 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 12:47:23.508610 2122632 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 12:47:23.508622 2122632 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 12:47:23.508630 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.508639 2122632 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 12:47:23.508648 2122632 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 12:47:23.508657 2122632 command_runner.go:130] > # the cgroup blockio controller.
	I0819 12:47:23.508661 2122632 command_runner.go:130] > # blockio_config_file = ""
	I0819 12:47:23.508670 2122632 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 12:47:23.508675 2122632 command_runner.go:130] > # blockio parameters.
	I0819 12:47:23.508681 2122632 command_runner.go:130] > # blockio_reload = false
	I0819 12:47:23.508687 2122632 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 12:47:23.508693 2122632 command_runner.go:130] > # irqbalance daemon.
	I0819 12:47:23.508698 2122632 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 12:47:23.508709 2122632 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 12:47:23.508718 2122632 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 12:47:23.508724 2122632 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 12:47:23.508732 2122632 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 12:47:23.508738 2122632 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 12:47:23.508746 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.508750 2122632 command_runner.go:130] > # rdt_config_file = ""
	I0819 12:47:23.508758 2122632 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 12:47:23.508764 2122632 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 12:47:23.508784 2122632 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 12:47:23.508791 2122632 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 12:47:23.508797 2122632 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 12:47:23.508804 2122632 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 12:47:23.508808 2122632 command_runner.go:130] > # will be added.
	I0819 12:47:23.508812 2122632 command_runner.go:130] > # default_capabilities = [
	I0819 12:47:23.508815 2122632 command_runner.go:130] > # 	"CHOWN",
	I0819 12:47:23.508821 2122632 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 12:47:23.508827 2122632 command_runner.go:130] > # 	"FSETID",
	I0819 12:47:23.508830 2122632 command_runner.go:130] > # 	"FOWNER",
	I0819 12:47:23.508834 2122632 command_runner.go:130] > # 	"SETGID",
	I0819 12:47:23.508837 2122632 command_runner.go:130] > # 	"SETUID",
	I0819 12:47:23.508841 2122632 command_runner.go:130] > # 	"SETPCAP",
	I0819 12:47:23.508845 2122632 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 12:47:23.508848 2122632 command_runner.go:130] > # 	"KILL",
	I0819 12:47:23.508852 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508859 2122632 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 12:47:23.508867 2122632 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 12:47:23.508872 2122632 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 12:47:23.508881 2122632 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 12:47:23.508889 2122632 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:47:23.508894 2122632 command_runner.go:130] > default_sysctls = [
	I0819 12:47:23.508899 2122632 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 12:47:23.508902 2122632 command_runner.go:130] > ]
	I0819 12:47:23.508907 2122632 command_runner.go:130] > # List of devices on the host that a
	I0819 12:47:23.508912 2122632 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 12:47:23.508918 2122632 command_runner.go:130] > # allowed_devices = [
	I0819 12:47:23.508922 2122632 command_runner.go:130] > # 	"/dev/fuse",
	I0819 12:47:23.508925 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508930 2122632 command_runner.go:130] > # List of additional devices. specified as
	I0819 12:47:23.508939 2122632 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 12:47:23.508944 2122632 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 12:47:23.508954 2122632 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:47:23.508959 2122632 command_runner.go:130] > # additional_devices = [
	I0819 12:47:23.508962 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508969 2122632 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 12:47:23.508973 2122632 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 12:47:23.508979 2122632 command_runner.go:130] > # 	"/etc/cdi",
	I0819 12:47:23.508983 2122632 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 12:47:23.508986 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508991 2122632 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 12:47:23.508999 2122632 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 12:47:23.509003 2122632 command_runner.go:130] > # Defaults to false.
	I0819 12:47:23.509010 2122632 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 12:47:23.509016 2122632 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 12:47:23.509025 2122632 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 12:47:23.509031 2122632 command_runner.go:130] > # hooks_dir = [
	I0819 12:47:23.509036 2122632 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 12:47:23.509040 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.509046 2122632 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 12:47:23.509054 2122632 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 12:47:23.509059 2122632 command_runner.go:130] > # its default mounts from the following two files:
	I0819 12:47:23.509062 2122632 command_runner.go:130] > #
	I0819 12:47:23.509068 2122632 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 12:47:23.509076 2122632 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 12:47:23.509082 2122632 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 12:47:23.509088 2122632 command_runner.go:130] > #
	I0819 12:47:23.509093 2122632 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 12:47:23.509102 2122632 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 12:47:23.509108 2122632 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 12:47:23.509114 2122632 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 12:47:23.509118 2122632 command_runner.go:130] > #
	I0819 12:47:23.509124 2122632 command_runner.go:130] > # default_mounts_file = ""
	I0819 12:47:23.509129 2122632 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 12:47:23.509137 2122632 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 12:47:23.509141 2122632 command_runner.go:130] > pids_limit = 1024
	I0819 12:47:23.509146 2122632 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 12:47:23.509154 2122632 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 12:47:23.509161 2122632 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 12:47:23.509170 2122632 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 12:47:23.509174 2122632 command_runner.go:130] > # log_size_max = -1
	I0819 12:47:23.509180 2122632 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 12:47:23.509189 2122632 command_runner.go:130] > # log_to_journald = false
	I0819 12:47:23.509195 2122632 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 12:47:23.509200 2122632 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 12:47:23.509205 2122632 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 12:47:23.509212 2122632 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 12:47:23.509218 2122632 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 12:47:23.509225 2122632 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 12:47:23.509230 2122632 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 12:47:23.509235 2122632 command_runner.go:130] > # read_only = false
	I0819 12:47:23.509242 2122632 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 12:47:23.509248 2122632 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 12:47:23.509254 2122632 command_runner.go:130] > # live configuration reload.
	I0819 12:47:23.509258 2122632 command_runner.go:130] > # log_level = "info"
	I0819 12:47:23.509263 2122632 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 12:47:23.509270 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.509274 2122632 command_runner.go:130] > # log_filter = ""
	I0819 12:47:23.509281 2122632 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 12:47:23.509288 2122632 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 12:47:23.509294 2122632 command_runner.go:130] > # separated by comma.
	I0819 12:47:23.509303 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509309 2122632 command_runner.go:130] > # uid_mappings = ""
	I0819 12:47:23.509315 2122632 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 12:47:23.509320 2122632 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 12:47:23.509325 2122632 command_runner.go:130] > # separated by comma.
	I0819 12:47:23.509332 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509338 2122632 command_runner.go:130] > # gid_mappings = ""
	I0819 12:47:23.509343 2122632 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 12:47:23.509350 2122632 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:47:23.509356 2122632 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:47:23.509365 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509369 2122632 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 12:47:23.509375 2122632 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 12:47:23.509383 2122632 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:47:23.509389 2122632 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:47:23.509398 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509405 2122632 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 12:47:23.509411 2122632 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 12:47:23.509416 2122632 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 12:47:23.509424 2122632 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 12:47:23.509428 2122632 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 12:47:23.509435 2122632 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 12:47:23.509441 2122632 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 12:47:23.509448 2122632 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 12:47:23.509453 2122632 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 12:47:23.509457 2122632 command_runner.go:130] > drop_infra_ctr = false
	I0819 12:47:23.509463 2122632 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 12:47:23.509474 2122632 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 12:47:23.509480 2122632 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 12:47:23.509486 2122632 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 12:47:23.509493 2122632 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 12:47:23.509500 2122632 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 12:47:23.509506 2122632 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 12:47:23.509513 2122632 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 12:47:23.509517 2122632 command_runner.go:130] > # shared_cpuset = ""
	I0819 12:47:23.509525 2122632 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 12:47:23.509531 2122632 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 12:47:23.509537 2122632 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 12:47:23.509544 2122632 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 12:47:23.509550 2122632 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 12:47:23.509555 2122632 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 12:47:23.509567 2122632 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 12:47:23.509573 2122632 command_runner.go:130] > # enable_criu_support = false
	I0819 12:47:23.509582 2122632 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 12:47:23.509594 2122632 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 12:47:23.509603 2122632 command_runner.go:130] > # enable_pod_events = false
	I0819 12:47:23.509612 2122632 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:47:23.509624 2122632 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:47:23.509635 2122632 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 12:47:23.509644 2122632 command_runner.go:130] > # default_runtime = "runc"
	I0819 12:47:23.509653 2122632 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 12:47:23.509663 2122632 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 12:47:23.509671 2122632 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 12:47:23.509682 2122632 command_runner.go:130] > # creation as a file is not desired either.
	I0819 12:47:23.509692 2122632 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 12:47:23.509699 2122632 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 12:47:23.509704 2122632 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 12:47:23.509709 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.509715 2122632 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 12:47:23.509723 2122632 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 12:47:23.509729 2122632 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 12:47:23.509736 2122632 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 12:47:23.509739 2122632 command_runner.go:130] > #
	I0819 12:47:23.509744 2122632 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 12:47:23.509751 2122632 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 12:47:23.509773 2122632 command_runner.go:130] > # runtime_type = "oci"
	I0819 12:47:23.509780 2122632 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 12:47:23.509784 2122632 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 12:47:23.509789 2122632 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 12:47:23.509794 2122632 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 12:47:23.509798 2122632 command_runner.go:130] > # monitor_env = []
	I0819 12:47:23.509803 2122632 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 12:47:23.509809 2122632 command_runner.go:130] > # allowed_annotations = []
	I0819 12:47:23.509814 2122632 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 12:47:23.509819 2122632 command_runner.go:130] > # Where:
	I0819 12:47:23.509824 2122632 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 12:47:23.509831 2122632 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 12:47:23.509837 2122632 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 12:47:23.509845 2122632 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 12:47:23.509849 2122632 command_runner.go:130] > #   in $PATH.
	I0819 12:47:23.509855 2122632 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 12:47:23.509861 2122632 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 12:47:23.509868 2122632 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 12:47:23.509873 2122632 command_runner.go:130] > #   state.
	I0819 12:47:23.509879 2122632 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 12:47:23.509887 2122632 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0819 12:47:23.509908 2122632 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 12:47:23.509922 2122632 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 12:47:23.509928 2122632 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 12:47:23.509937 2122632 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 12:47:23.509945 2122632 command_runner.go:130] > #   The currently recognized values are:
	I0819 12:47:23.509953 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 12:47:23.509961 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 12:47:23.509969 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 12:47:23.509975 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 12:47:23.509982 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 12:47:23.509990 2122632 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 12:47:23.509998 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 12:47:23.510004 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 12:47:23.510012 2122632 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 12:47:23.510017 2122632 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 12:47:23.510024 2122632 command_runner.go:130] > #   deprecated option "conmon".
	I0819 12:47:23.510031 2122632 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 12:47:23.510038 2122632 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 12:47:23.510044 2122632 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 12:47:23.510051 2122632 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 12:47:23.510057 2122632 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 12:47:23.510061 2122632 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 12:47:23.510069 2122632 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 12:47:23.510074 2122632 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 12:47:23.510077 2122632 command_runner.go:130] > #
	I0819 12:47:23.510081 2122632 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 12:47:23.510084 2122632 command_runner.go:130] > #
	I0819 12:47:23.510089 2122632 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 12:47:23.510095 2122632 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 12:47:23.510098 2122632 command_runner.go:130] > #
	I0819 12:47:23.510104 2122632 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 12:47:23.510109 2122632 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 12:47:23.510112 2122632 command_runner.go:130] > #
	I0819 12:47:23.510117 2122632 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 12:47:23.510121 2122632 command_runner.go:130] > # feature.
	I0819 12:47:23.510123 2122632 command_runner.go:130] > #
	I0819 12:47:23.510129 2122632 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 12:47:23.510134 2122632 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 12:47:23.510140 2122632 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 12:47:23.510151 2122632 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 12:47:23.510157 2122632 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 12:47:23.510159 2122632 command_runner.go:130] > #
	I0819 12:47:23.510165 2122632 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 12:47:23.510170 2122632 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 12:47:23.510173 2122632 command_runner.go:130] > #
	I0819 12:47:23.510178 2122632 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 12:47:23.510185 2122632 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 12:47:23.510188 2122632 command_runner.go:130] > #
	I0819 12:47:23.510194 2122632 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 12:47:23.510199 2122632 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 12:47:23.510203 2122632 command_runner.go:130] > # limitation.
	I0819 12:47:23.510208 2122632 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 12:47:23.510214 2122632 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 12:47:23.510218 2122632 command_runner.go:130] > runtime_type = "oci"
	I0819 12:47:23.510223 2122632 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 12:47:23.510230 2122632 command_runner.go:130] > runtime_config_path = ""
	I0819 12:47:23.510236 2122632 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 12:47:23.510242 2122632 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 12:47:23.510247 2122632 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 12:47:23.510253 2122632 command_runner.go:130] > monitor_env = [
	I0819 12:47:23.510259 2122632 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:47:23.510264 2122632 command_runner.go:130] > ]
	I0819 12:47:23.510269 2122632 command_runner.go:130] > privileged_without_host_devices = false
	I0819 12:47:23.510275 2122632 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 12:47:23.510282 2122632 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 12:47:23.510288 2122632 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 12:47:23.510297 2122632 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 12:47:23.510304 2122632 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 12:47:23.510311 2122632 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 12:47:23.510320 2122632 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 12:47:23.510330 2122632 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 12:47:23.510336 2122632 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 12:47:23.510342 2122632 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 12:47:23.510345 2122632 command_runner.go:130] > # Example:
	I0819 12:47:23.510350 2122632 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 12:47:23.510354 2122632 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 12:47:23.510362 2122632 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 12:47:23.510366 2122632 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 12:47:23.510370 2122632 command_runner.go:130] > # cpuset = 0
	I0819 12:47:23.510373 2122632 command_runner.go:130] > # cpushares = "0-1"
	I0819 12:47:23.510376 2122632 command_runner.go:130] > # Where:
	I0819 12:47:23.510380 2122632 command_runner.go:130] > # The workload name is workload-type.
	I0819 12:47:23.510387 2122632 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 12:47:23.510392 2122632 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 12:47:23.510397 2122632 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 12:47:23.510405 2122632 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 12:47:23.510410 2122632 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 12:47:23.510414 2122632 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 12:47:23.510420 2122632 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 12:47:23.510424 2122632 command_runner.go:130] > # Default value is set to true
	I0819 12:47:23.510428 2122632 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 12:47:23.510433 2122632 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 12:47:23.510438 2122632 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 12:47:23.510442 2122632 command_runner.go:130] > # Default value is set to 'false'
	I0819 12:47:23.510447 2122632 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 12:47:23.510453 2122632 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 12:47:23.510456 2122632 command_runner.go:130] > #
	I0819 12:47:23.510461 2122632 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 12:47:23.510466 2122632 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 12:47:23.510476 2122632 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 12:47:23.510481 2122632 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 12:47:23.510486 2122632 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 12:47:23.510489 2122632 command_runner.go:130] > [crio.image]
	I0819 12:47:23.510495 2122632 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 12:47:23.510499 2122632 command_runner.go:130] > # default_transport = "docker://"
	I0819 12:47:23.510504 2122632 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 12:47:23.510510 2122632 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:47:23.510516 2122632 command_runner.go:130] > # global_auth_file = ""
	I0819 12:47:23.510520 2122632 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 12:47:23.510525 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.510529 2122632 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 12:47:23.510536 2122632 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 12:47:23.510541 2122632 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:47:23.510546 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.510552 2122632 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 12:47:23.510559 2122632 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 12:47:23.510568 2122632 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 12:47:23.510577 2122632 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 12:47:23.510586 2122632 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 12:47:23.510595 2122632 command_runner.go:130] > # pause_command = "/pause"
	I0819 12:47:23.510604 2122632 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 12:47:23.510616 2122632 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 12:47:23.510627 2122632 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 12:47:23.510639 2122632 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 12:47:23.510652 2122632 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 12:47:23.510661 2122632 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 12:47:23.510665 2122632 command_runner.go:130] > # pinned_images = [
	I0819 12:47:23.510668 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.510674 2122632 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 12:47:23.510682 2122632 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 12:47:23.510690 2122632 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 12:47:23.510698 2122632 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 12:47:23.510704 2122632 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 12:47:23.510710 2122632 command_runner.go:130] > # signature_policy = ""
	I0819 12:47:23.510715 2122632 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 12:47:23.510724 2122632 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 12:47:23.510730 2122632 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 12:47:23.510736 2122632 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 12:47:23.510742 2122632 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 12:47:23.510748 2122632 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 12:47:23.510754 2122632 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 12:47:23.510762 2122632 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 12:47:23.510766 2122632 command_runner.go:130] > # changing them here.
	I0819 12:47:23.510772 2122632 command_runner.go:130] > # insecure_registries = [
	I0819 12:47:23.510776 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.510782 2122632 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 12:47:23.510789 2122632 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 12:47:23.510793 2122632 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 12:47:23.510799 2122632 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 12:47:23.510804 2122632 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 12:47:23.510815 2122632 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 12:47:23.510818 2122632 command_runner.go:130] > # CNI plugins.
	I0819 12:47:23.510822 2122632 command_runner.go:130] > [crio.network]
	I0819 12:47:23.510827 2122632 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 12:47:23.510833 2122632 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 12:47:23.510837 2122632 command_runner.go:130] > # cni_default_network = ""
	I0819 12:47:23.510842 2122632 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 12:47:23.510849 2122632 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 12:47:23.510855 2122632 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 12:47:23.510861 2122632 command_runner.go:130] > # plugin_dirs = [
	I0819 12:47:23.510864 2122632 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 12:47:23.510867 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.510873 2122632 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 12:47:23.510877 2122632 command_runner.go:130] > [crio.metrics]
	I0819 12:47:23.510882 2122632 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 12:47:23.510888 2122632 command_runner.go:130] > enable_metrics = true
	I0819 12:47:23.510894 2122632 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 12:47:23.510899 2122632 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 12:47:23.510905 2122632 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0819 12:47:23.510913 2122632 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 12:47:23.510920 2122632 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 12:47:23.510927 2122632 command_runner.go:130] > # metrics_collectors = [
	I0819 12:47:23.510931 2122632 command_runner.go:130] > # 	"operations",
	I0819 12:47:23.510935 2122632 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 12:47:23.510942 2122632 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 12:47:23.510946 2122632 command_runner.go:130] > # 	"operations_errors",
	I0819 12:47:23.510950 2122632 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 12:47:23.510954 2122632 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 12:47:23.510958 2122632 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 12:47:23.510963 2122632 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 12:47:23.510967 2122632 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 12:47:23.510974 2122632 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 12:47:23.510978 2122632 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 12:47:23.510984 2122632 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 12:47:23.510988 2122632 command_runner.go:130] > # 	"containers_oom_total",
	I0819 12:47:23.510994 2122632 command_runner.go:130] > # 	"containers_oom",
	I0819 12:47:23.510998 2122632 command_runner.go:130] > # 	"processes_defunct",
	I0819 12:47:23.511004 2122632 command_runner.go:130] > # 	"operations_total",
	I0819 12:47:23.511008 2122632 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 12:47:23.511013 2122632 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 12:47:23.511016 2122632 command_runner.go:130] > # 	"operations_errors_total",
	I0819 12:47:23.511020 2122632 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 12:47:23.511025 2122632 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 12:47:23.511028 2122632 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 12:47:23.511033 2122632 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 12:47:23.511040 2122632 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 12:47:23.511046 2122632 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 12:47:23.511050 2122632 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 12:47:23.511057 2122632 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 12:47:23.511061 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.511068 2122632 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 12:47:23.511072 2122632 command_runner.go:130] > # metrics_port = 9090
	I0819 12:47:23.511081 2122632 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 12:47:23.511085 2122632 command_runner.go:130] > # metrics_socket = ""
	I0819 12:47:23.511090 2122632 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 12:47:23.511098 2122632 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 12:47:23.511105 2122632 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 12:47:23.511110 2122632 command_runner.go:130] > # certificate on any modification event.
	I0819 12:47:23.511115 2122632 command_runner.go:130] > # metrics_cert = ""
	I0819 12:47:23.511121 2122632 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 12:47:23.511128 2122632 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 12:47:23.511132 2122632 command_runner.go:130] > # metrics_key = ""
	I0819 12:47:23.511138 2122632 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 12:47:23.511142 2122632 command_runner.go:130] > [crio.tracing]
	I0819 12:47:23.511147 2122632 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 12:47:23.511153 2122632 command_runner.go:130] > # enable_tracing = false
	I0819 12:47:23.511158 2122632 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 12:47:23.511164 2122632 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 12:47:23.511171 2122632 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 12:47:23.511176 2122632 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 12:47:23.511180 2122632 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 12:47:23.511186 2122632 command_runner.go:130] > [crio.nri]
	I0819 12:47:23.511190 2122632 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 12:47:23.511194 2122632 command_runner.go:130] > # enable_nri = false
	I0819 12:47:23.511197 2122632 command_runner.go:130] > # NRI socket to listen on.
	I0819 12:47:23.511202 2122632 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 12:47:23.511206 2122632 command_runner.go:130] > # NRI plugin directory to use.
	I0819 12:47:23.511210 2122632 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 12:47:23.511217 2122632 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 12:47:23.511223 2122632 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 12:47:23.511231 2122632 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 12:47:23.511235 2122632 command_runner.go:130] > # nri_disable_connections = false
	I0819 12:47:23.511240 2122632 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 12:47:23.511247 2122632 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 12:47:23.511252 2122632 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 12:47:23.511258 2122632 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 12:47:23.511264 2122632 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 12:47:23.511270 2122632 command_runner.go:130] > [crio.stats]
	I0819 12:47:23.511278 2122632 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 12:47:23.511285 2122632 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 12:47:23.511289 2122632 command_runner.go:130] > # stats_collection_period = 0
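
	The dumped config enables the CRI-O metrics server ([crio.metrics] with enable_metrics = true and the commented default metrics_port = 9090). Below is a minimal sketch, not part of the test log, of scraping that Prometheus endpoint and filtering for the image-pull counters listed above; the host/port and the "crio_image_pulls" metric prefix are assumptions for illustration.

```go
// metricscheck is an illustrative sketch: scrape CRI-O's Prometheus endpoint
// (enable_metrics = true, metrics_port 9090 in the config above) and print the
// image-pull counters. Endpoint address and metric prefix are assumptions.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics") // assumed default metrics_port
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // metrics lines can be long
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_image_pulls") { // one of the collectors listed above
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Println("scan error:", err)
	}
}
```
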
	I0819 12:47:23.511411 2122632 cni.go:84] Creating CNI manager for ""
	I0819 12:47:23.511421 2122632 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:47:23.511430 2122632 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:47:23.511456 2122632 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-172739 NodeName:multinode-172739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:47:23.511606 2122632 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-172739"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
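
	The generated kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which the log shows being written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch, assuming that path and the use of gopkg.in/yaml.v3 (neither of which is how minikube itself parses it), of splitting the file into its documents and printing each one's apiVersion and kind as a sanity check:

```go
// kubeadmkinds is an illustrative sketch: read the multi-document kubeadm
// config written to /var/tmp/minikube/kubeadm.yaml.new and print the
// apiVersion/kind of each document. Path and YAML library are assumptions.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```
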
	
	I0819 12:47:23.511686 2122632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:47:23.522060 2122632 command_runner.go:130] > kubeadm
	I0819 12:47:23.522083 2122632 command_runner.go:130] > kubectl
	I0819 12:47:23.522087 2122632 command_runner.go:130] > kubelet
	I0819 12:47:23.522111 2122632 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:47:23.522159 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:47:23.531890 2122632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 12:47:23.548391 2122632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:47:23.564392 2122632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0819 12:47:23.580579 2122632 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0819 12:47:23.584623 2122632 command_runner.go:130] > 192.168.39.191	control-plane.minikube.internal
	I0819 12:47:23.584873 2122632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:47:23.727462 2122632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:47:23.742012 2122632 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739 for IP: 192.168.39.191
	I0819 12:47:23.742039 2122632 certs.go:194] generating shared ca certs ...
	I0819 12:47:23.742056 2122632 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:47:23.742220 2122632 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:47:23.742259 2122632 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:47:23.742281 2122632 certs.go:256] generating profile certs ...
	I0819 12:47:23.742366 2122632 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/client.key
	I0819 12:47:23.742422 2122632 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.key.39ec7326
	I0819 12:47:23.742456 2122632 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.key
	I0819 12:47:23.742468 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:47:23.742482 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:47:23.742495 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:47:23.742505 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:47:23.742515 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:47:23.742529 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:47:23.742541 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:47:23.742551 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:47:23.742612 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:47:23.742640 2122632 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:47:23.742652 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:47:23.742674 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:47:23.742698 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:47:23.742720 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:47:23.742757 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:47:23.742784 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:47:23.742797 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:23.742809 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:47:23.743473 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:47:23.767807 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:47:23.791662 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:47:23.814007 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:47:23.839282 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:47:23.863141 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:47:23.886057 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:47:23.908154 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 12:47:23.931063 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:47:23.953654 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:47:23.976541 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:47:23.999770 2122632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:47:24.015470 2122632 ssh_runner.go:195] Run: openssl version
	I0819 12:47:24.021002 2122632 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 12:47:24.021248 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:47:24.032088 2122632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.036816 2122632 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.036844 2122632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.036885 2122632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.042350 2122632 command_runner.go:130] > 3ec20f2e
	I0819 12:47:24.042596 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:47:24.052753 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:47:24.063878 2122632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.068612 2122632 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.068649 2122632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.068705 2122632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.074458 2122632 command_runner.go:130] > b5213941
	I0819 12:47:24.074534 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:47:24.084100 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:47:24.095072 2122632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.099623 2122632 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.099646 2122632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.099682 2122632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.105374 2122632 command_runner.go:130] > 51391683
	I0819 12:47:24.105426 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
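
	Each of the three blocks above follows the same pattern: compute the OpenSSL subject-name hash of a CA certificate (e.g. b5213941 for minikubeCA.pem) and symlink it as /etc/ssl/certs/<hash>.0 so the system trust store can resolve it. A minimal local sketch of that sequence, assuming the certificate path from the log and root privileges; minikube itself runs these steps over SSH rather than with this code.

```go
// linkcacert mirrors the shell sequence in the log above: hash a CA cert with
// openssl and create the /etc/ssl/certs/<hash>.0 symlink if it is missing.
// The certificate path is taken from the log; this is only a sketch.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println("linked", cert, "->", link)
}
```
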
	I0819 12:47:24.115460 2122632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:47:24.120300 2122632 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:47:24.120322 2122632 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 12:47:24.120328 2122632 command_runner.go:130] > Device: 253,1	Inode: 9432598     Links: 1
	I0819 12:47:24.120335 2122632 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:47:24.120341 2122632 command_runner.go:130] > Access: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120345 2122632 command_runner.go:130] > Modify: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120350 2122632 command_runner.go:130] > Change: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120355 2122632 command_runner.go:130] >  Birth: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120410 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:47:24.126498 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.126568 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:47:24.132291 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.132357 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:47:24.137891 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.138071 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:47:24.143691 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.143926 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:47:24.149226 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.149427 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:47:24.154533 2122632 command_runner.go:130] > Certificate will not expire
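
	Each `openssl x509 -noout -in <crt> -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" is openssl's success output. The same check in pure Go with crypto/x509 looks roughly like the sketch below; the file path is illustrative, chosen from the cert names in the log.

```go
// checkend reproduces `openssl x509 -checkend 86400` from the log: report
// whether a certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire") // openssl would exit non-zero here
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```
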
	I0819 12:47:24.154726 2122632 kubeadm.go:392] StartCluster: {Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:47:24.154836 2122632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:47:24.154887 2122632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:47:24.196012 2122632 command_runner.go:130] > ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1
	I0819 12:47:24.196035 2122632 command_runner.go:130] > e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e
	I0819 12:47:24.196041 2122632 command_runner.go:130] > 7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60
	I0819 12:47:24.196048 2122632 command_runner.go:130] > 7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2
	I0819 12:47:24.196053 2122632 command_runner.go:130] > 7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34
	I0819 12:47:24.196059 2122632 command_runner.go:130] > 0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8
	I0819 12:47:24.196064 2122632 command_runner.go:130] > 1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5
	I0819 12:47:24.196070 2122632 command_runner.go:130] > 0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a
	I0819 12:47:24.196089 2122632 cri.go:89] found id: "ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1"
	I0819 12:47:24.196096 2122632 cri.go:89] found id: "e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e"
	I0819 12:47:24.196098 2122632 cri.go:89] found id: "7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60"
	I0819 12:47:24.196101 2122632 cri.go:89] found id: "7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2"
	I0819 12:47:24.196105 2122632 cri.go:89] found id: "7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34"
	I0819 12:47:24.196108 2122632 cri.go:89] found id: "0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8"
	I0819 12:47:24.196112 2122632 cri.go:89] found id: "1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5"
	I0819 12:47:24.196114 2122632 cri.go:89] found id: "0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a"
	I0819 12:47:24.196117 2122632 cri.go:89] found id: ""
	I0819 12:47:24.196183 2122632 ssh_runner.go:195] Run: sudo runc list -f json
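
	The "found id:" lines above come from listing every kube-system container with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`. A minimal sketch of collecting the same IDs locally; invoking crictl directly (rather than over SSH as minikube does) is an assumption, and it needs root plus a local CRI-O socket.

```go
// listkube is a sketch of the step logged above: ask the CRI runtime for all
// kube-system containers by label and collect the IDs that the "found id:"
// lines report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}
```
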
	
	
	==> CRI-O <==
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.190933504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ded1a75-8d5d-4d8f-86d1-8d2ab6e6aa03 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.191926698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d585818a-e4cc-4538-b49a-ea1fae4bff0d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.192403986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071749192381823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d585818a-e4cc-4538-b49a-ea1fae4bff0d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.193056714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baf33db9-21e8-441d-9e09-45c018459161 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.193156799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baf33db9-21e8-441d-9e09-45c018459161 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.193543709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baf33db9-21e8-441d-9e09-45c018459161 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.235173714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5be720bc-9fb0-4189-897c-0d20c4354c14 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.235247215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5be720bc-9fb0-4189-897c-0d20c4354c14 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.236479239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=359d824e-6967-4a75-8a77-92d91e592a89 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.236985481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071749236963656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=359d824e-6967-4a75-8a77-92d91e592a89 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.237527867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=593ca33a-706f-4939-a74e-21a2cd5e335d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.237578276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=593ca33a-706f-4939-a74e-21a2cd5e335d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.240012069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=593ca33a-706f-4939-a74e-21a2cd5e335d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.284628171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24a7a68e-4063-4e98-b6df-5c0048c47d67 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.284769402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24a7a68e-4063-4e98-b6df-5c0048c47d67 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.285922177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0062c626-e553-4008-bdef-26e42d0874cd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.286492417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071749286472386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0062c626-e553-4008-bdef-26e42d0874cd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.287325815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d9ccce5-3e5e-4024-a5e7-da848e09a9eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.287379693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d9ccce5-3e5e-4024-a5e7-da848e09a9eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.287724445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d9ccce5-3e5e-4024-a5e7-da848e09a9eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.301730324Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6fc166bb-e4ce-401e-b921-ecd743dbe556 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.302081408Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-czkl5,Uid:77b4d7c5-4601-449e-943a-4bf0d53eb5c8,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071684373824726,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:47:30.175252887Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nnmfg,Uid:2b25bb77-4aed-4c5e-a243-af62da06a2de,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1724071650574054024,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:47:30.175259187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&PodSandboxMetadata{Name:kube-proxy-fwckd,Uid:8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071650525678717,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-08-19T12:47:30.175248748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&PodSandboxMetadata{Name:kindnet-pc42l,Uid:039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071650510159381,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:47:30.175255567Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8494c463-5cd5-4267-a1ae-177810040a05,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1724071650491729194,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T12:47:30.175251680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-172739,Uid:c03c91913e812d97f83650df33a579b1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071646702764079,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c03c91913e812d97f83650df33a579b1,kubernetes.io/config.seen: 2024-08-19T12:47:26.180900833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&PodSandboxMetadata{Name:etcd-multinode-17273
9,Uid:a8f95ba2d20ab2a954f95d491e841ccc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071646701682578,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.191:2379,kubernetes.io/config.hash: a8f95ba2d20ab2a954f95d491e841ccc,kubernetes.io/config.seen: 2024-08-19T12:47:26.180895211Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-172739,Uid:76d876c9452dc834909a125261250c55,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071646697938881,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76d876c9452dc834909a125261250c55,kubernetes.io/config.seen: 2024-08-19T12:47:26.180899709Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-172739,Uid:31ea879b7d0ba1998b615821db262369,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724071646686191963,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.191:8443,kuberne
tes.io/config.hash: 31ea879b7d0ba1998b615821db262369,kubernetes.io/config.seen: 2024-08-19T12:47:26.180898611Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-czkl5,Uid:77b4d7c5-4601-449e-943a-4bf0d53eb5c8,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071328005436462,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:42:06.794538892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8494c463-5cd5-4267-a1ae-177810040a05,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1724071275719605791,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T12:41:15.409828893Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nnmfg,Uid:2b25bb77-4aed-4c5e-a243-af62da06a2de,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071275717683537,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:41:15.406997338Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&PodSandboxMetadata{Name:kube-proxy-fwckd,Uid:8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071263206784997,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:41:01.398396996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&PodSandboxMetadata{Name:kindnet-pc42l,Uid:039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071261712676318,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:41:01.392208751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-172739,Uid:c03c91913e812d97f83650df33a579b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071250875847066,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c03c91913e812d97f83650df33a579b1,kubernetes.io/config.seen: 2024-08-19T12:40:50.402148533Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&PodSandboxMetadata{Name:
kube-apiserver-multinode-172739,Uid:31ea879b7d0ba1998b615821db262369,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071250871909703,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.191:8443,kubernetes.io/config.hash: 31ea879b7d0ba1998b615821db262369,kubernetes.io/config.seen: 2024-08-19T12:40:50.402145682Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&PodSandboxMetadata{Name:etcd-multinode-172739,Uid:a8f95ba2d20ab2a954f95d491e841ccc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071250871208378,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.191:2379,kubernetes.io/config.hash: a8f95ba2d20ab2a954f95d491e841ccc,kubernetes.io/config.seen: 2024-08-19T12:40:50.402090636Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-172739,Uid:76d876c9452dc834909a125261250c55,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724071250860292953,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,tier: control-plane,},Annotati
ons:map[string]string{kubernetes.io/config.hash: 76d876c9452dc834909a125261250c55,kubernetes.io/config.seen: 2024-08-19T12:40:50.402147657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6fc166bb-e4ce-401e-b921-ecd743dbe556 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.303244306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90ec9bcf-23a0-4a67-adb8-2b75b38b21fc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.303383294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90ec9bcf-23a0-4a67-adb8-2b75b38b21fc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:49:09 multinode-172739 crio[2750]: time="2024-08-19 12:49:09.303709013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90ec9bcf-23a0-4a67-adb8-2b75b38b21fc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	496c26d699019       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   8609851bb9281       busybox-7dff88458-czkl5
	35f25641c5dfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   92c9401559c45       kindnet-pc42l
	543ac994ca741       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   d5f63d9663d01       coredns-6f6b679f8f-nnmfg
	f45a9788fa14f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   90aea072eba44       storage-provisioner
	9f91dcff6705b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   dfec30fb61917       kube-proxy-fwckd
	843131df4f0f0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   dfbe5dea075fc       kube-scheduler-multinode-172739
	aa230465a97a6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   5547b42c380c3       kube-apiserver-multinode-172739
	e8764e282807e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   958b3261f0964       kube-controller-manager-multinode-172739
	57507b47037f0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   2c552218a9055       etcd-multinode-172739
	def9c00aa12ed       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   657161da2edeb       busybox-7dff88458-czkl5
	ef89dd32791ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   0ca70924c14b8       coredns-6f6b679f8f-nnmfg
	e79879fc5f7a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   efacb57a7054f       storage-provisioner
	7375e5f48c3e1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   ccc35bddd76b9       kindnet-pc42l
	7793674a49e92       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   856dfb5095308       kube-proxy-fwckd
	7d77d11ed2ed2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   7e51c80fc0d73       kube-controller-manager-multinode-172739
	0aa9dce507ce8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   222b28c869551       kube-apiserver-multinode-172739
	1aa63e92b723d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   d90b496120c69       etcd-multinode-172739
	0b1e3f91d8aef       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   33f130edf5fcc       kube-scheduler-multinode-172739
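
The listing above is CRI-O's post-restart view of the node: each control-plane and addon container appears twice, with the attempt-1 copies Running and the original attempt-0 copies Exited. As a minimal sketch for regenerating such a listing on the node (assuming crictl is reachable over minikube ssh; column layout varies by crictl version):

  $ minikube -p multinode-172739 ssh -- sudo crictl ps -a   # all containers, including the exited attempt-0 ones
  $ minikube -p multinode-172739 ssh -- sudo crictl pods    # the pod sandboxes referenced in the POD ID column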
	
	
	==> coredns [543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55349 - 62834 "HINFO IN 6632884223335137357.1338848667664456225. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01124799s
	
	
	==> coredns [ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1] <==
	[INFO] 10.244.1.2:59315 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001647449s
	[INFO] 10.244.1.2:55593 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116989s
	[INFO] 10.244.1.2:56425 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081028s
	[INFO] 10.244.1.2:52745 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001198701s
	[INFO] 10.244.1.2:53720 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006822s
	[INFO] 10.244.1.2:49332 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086004s
	[INFO] 10.244.1.2:59109 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067783s
	[INFO] 10.244.0.3:58222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112986s
	[INFO] 10.244.0.3:33297 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107664s
	[INFO] 10.244.0.3:49911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048819s
	[INFO] 10.244.0.3:43193 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064409s
	[INFO] 10.244.1.2:45863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178747s
	[INFO] 10.244.1.2:59721 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154674s
	[INFO] 10.244.1.2:37933 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113763s
	[INFO] 10.244.1.2:52137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133666s
	[INFO] 10.244.0.3:48915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118063s
	[INFO] 10.244.0.3:43620 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015968s
	[INFO] 10.244.0.3:50159 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094508s
	[INFO] 10.244.0.3:51565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067737s
	[INFO] 10.244.1.2:51214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129447s
	[INFO] 10.244.1.2:47491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001125s
	[INFO] 10.244.1.2:56930 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066769s
	[INFO] 10.244.1.2:48361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077915s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
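
The query log above shows the usual resolution pattern from the test's busybox pods: AAAA/A lookups for kubernetes.default walking the pod search path (default.svc.cluster.local, then svc.cluster.local), plus PTR lookups for the service VIP 10.96.0.1 and the host 192.168.39.1, before this CoreDNS instance receives SIGTERM during the restart. Equivalent traffic can be generated by hand, as a sketch assuming the busybox pod from the container list is still running and the kubectl context is named after the profile:

  $ kubectl --context multinode-172739 exec busybox-7dff88458-czkl5 -- nslookup kubernetes.default
  $ kubectl --context multinode-172739 exec busybox-7dff88458-czkl5 -- nslookup host.minikube.internal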
	
	
	==> describe nodes <==
	Name:               multinode-172739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-172739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=multinode-172739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_40_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:40:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-172739
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:49:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:40:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:40:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:40:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:41:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    multinode-172739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3eef8608da994e1295670ed1689c13f1
	  System UUID:                3eef8608-da99-4e12-9567-0ed1689c13f1
	  Boot ID:                    4efaa1e9-af67-4dcb-9d07-acc7c11cdfca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-czkl5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 coredns-6f6b679f8f-nnmfg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m8s
	  kube-system                 etcd-multinode-172739                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m13s
	  kube-system                 kindnet-pc42l                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m8s
	  kube-system                 kube-apiserver-multinode-172739             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-multinode-172739    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-fwckd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-multinode-172739             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m5s                 kube-proxy       
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m13s                kubelet          Node multinode-172739 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m13s                kubelet          Node multinode-172739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s                kubelet          Node multinode-172739 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m13s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m9s                 node-controller  Node multinode-172739 event: Registered Node multinode-172739 in Controller
	  Normal  NodeReady                7m54s                kubelet          Node multinode-172739 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-172739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-172739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-172739 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                  node-controller  Node multinode-172739 event: Registered Node multinode-172739 in Controller
	
	
	Name:               multinode-172739-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-172739-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=multinode-172739
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_48_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:48:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-172739-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:49:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:48:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:48:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:48:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:48:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-172739-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5e74e0080be4d24abf93f8d5486aaef
	  System UUID:                c5e74e00-80be-4d24-abf9-3f8d5486aaef
	  Boot ID:                    6fa2cf42-2cb3-477a-b810-4dac77700fa0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ks8nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kindnet-vxlzj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m23s
	  kube-system                 kube-proxy-n6npn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m18s                  kube-proxy  
	  Normal  Starting                 53s                    kube-proxy  
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s (x2 over 7m24s)  kubelet     Node multinode-172739-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s (x2 over 7m24s)  kubelet     Node multinode-172739-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s (x2 over 7m24s)  kubelet     Node multinode-172739-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m5s                   kubelet     Node multinode-172739-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  57s (x2 over 57s)      kubelet     Node multinode-172739-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x2 over 57s)      kubelet     Node multinode-172739-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x2 over 57s)      kubelet     Node multinode-172739-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-172739-m02 status is now: NodeReady
	
	
	Name:               multinode-172739-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-172739-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=multinode-172739
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_48_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-172739-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:49:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:49:06 +0000   Mon, 19 Aug 2024 12:48:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:49:06 +0000   Mon, 19 Aug 2024 12:48:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:49:06 +0000   Mon, 19 Aug 2024 12:48:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:49:06 +0000   Mon, 19 Aug 2024 12:49:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    multinode-172739-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6b6abe0743c4ecea392318bbeb3828e
	  System UUID:                b6b6abe0-743c-4ece-a392-318bbeb3828e
	  Boot ID:                    046de400-b73b-46c0-a118-2bf5932cc30d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x5kdg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-proxy-bxt54    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m30s (x2 over 6m30s)  kubelet          Node multinode-172739-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x2 over 6m30s)  kubelet          Node multinode-172739-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s (x2 over 6m30s)  kubelet          Node multinode-172739-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m11s                  kubelet          Node multinode-172739-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-172739-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-172739-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-172739-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet          Node multinode-172739-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet          Node multinode-172739-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet          Node multinode-172739-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet          Node multinode-172739-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                    node-controller  Node multinode-172739-m03 event: Registered Node multinode-172739-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-172739-m03 status is now: NodeReady
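
At this point all three nodes report Ready: the control-plane at 192.168.39.191 plus the two workers re-registered at 12:48:12 and 12:48:48 after the restart. A compact way to cross-check the same state without the full describe output (standard kubectl, assuming the context name matches the profile):

  $ kubectl --context multinode-172739 get nodes -o wide     # Ready status, internal IPs, kubelet and runtime versions
  $ kubectl --context multinode-172739 get pods -A -o wide   # where the non-terminated pods listed above are scheduled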
	
	
	==> dmesg <==
	[  +0.064117] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062025] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.186161] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.120603] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.261815] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.936231] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +3.811228] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.065522] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.008447] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.087891] kauditd_printk_skb: 69 callbacks suppressed
	[Aug19 12:41] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.091712] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.950459] kauditd_printk_skb: 69 callbacks suppressed
	[Aug19 12:42] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 12:47] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.143262] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.164081] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.132101] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.277336] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +0.703570] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +2.314565] systemd-fstab-generator[2960]: Ignoring "noauto" option for root device
	[  +4.695490] kauditd_printk_skb: 184 callbacks suppressed
	[ +17.204099] systemd-fstab-generator[3814]: Ignoring "noauto" option for root device
	[  +0.092148] kauditd_printk_skb: 34 callbacks suppressed
	[Aug19 12:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5] <==
	{"level":"info","ts":"2024-08-19T12:40:52.288346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:40:52.290211Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:40:52.290353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:40:52.290406Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:40:52.290437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:40:52.291302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:40:52.293873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.191:2379"}
	{"level":"info","ts":"2024-08-19T12:40:52.291647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:40:52.296842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:40:52.291674Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:40:52.296988Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:41:10.694655Z","caller":"traceutil/trace.go:171","msg":"trace[999557054] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"124.245816ms","start":"2024-08-19T12:41:10.570385Z","end":"2024-08-19T12:41:10.694631Z","steps":["trace[999557054] 'process raft request'  (duration: 124.143859ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:41:53.292537Z","caller":"traceutil/trace.go:171","msg":"trace[384693846] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"133.15129ms","start":"2024-08-19T12:41:53.159364Z","end":"2024-08-19T12:41:53.292515Z","steps":["trace[384693846] 'process raft request'  (duration: 132.926691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:42:39.494506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.507186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-172739-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T12:42:39.494738Z","caller":"traceutil/trace.go:171","msg":"trace[1924394683] range","detail":"{range_begin:/registry/minions/multinode-172739-m03; range_end:; response_count:0; response_revision:574; }","duration":"149.821887ms","start":"2024-08-19T12:42:39.344902Z","end":"2024-08-19T12:42:39.494724Z","steps":["trace[1924394683] 'range keys from in-memory index tree'  (duration: 149.447353ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:45:50.935209Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T12:45:50.935375Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-172739","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	{"level":"warn","ts":"2024-08-19T12:45:50.935465Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:45:50.935561Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:45:51.023615Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:45:51.023721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:45:51.025167Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f21a8e08563785d2","current-leader-member-id":"f21a8e08563785d2"}
	{"level":"info","ts":"2024-08-19T12:45:51.028031Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:45:51.028263Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:45:51.028297Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-172739","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	
	
	==> etcd [57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c] <==
	{"level":"info","ts":"2024-08-19T12:47:27.267374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 switched to configuration voters=(17445412273030399442)"}
	{"level":"info","ts":"2024-08-19T12:47:27.289368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","added-peer-id":"f21a8e08563785d2","added-peer-peer-urls":["https://192.168.39.191:2380"]}
	{"level":"info","ts":"2024-08-19T12:47:27.289483Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:47:27.289513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:47:27.299878Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:47:27.300140Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f21a8e08563785d2","initial-advertise-peer-urls":["https://192.168.39.191:2380"],"listen-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.191:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:47:27.300165Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:47:27.300224Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:47:27.300230Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:47:28.305179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:47:28.305285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:47:28.305346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgPreVoteResp from f21a8e08563785d2 at term 2"}
	{"level":"info","ts":"2024-08-19T12:47:28.305381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.305405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgVoteResp from f21a8e08563785d2 at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.305432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.305457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f21a8e08563785d2 elected leader f21a8e08563785d2 at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.310330Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f21a8e08563785d2","local-member-attributes":"{Name:multinode-172739 ClientURLs:[https://192.168.39.191:2379]}","request-path":"/0/members/f21a8e08563785d2/attributes","cluster-id":"78cc5c67b96828b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:47:28.310640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:47:28.311062Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:47:28.315009Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:47:28.329976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.191:2379"}
	{"level":"info","ts":"2024-08-19T12:47:28.330541Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:47:28.315137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:47:28.331183Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:47:28.343359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:49:09 up 8 min,  0 users,  load average: 0.15, 0.19, 0.11
	Linux multinode-172739 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8] <==
	I0819 12:48:21.964258       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:48:31.956806       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:48:31.956966       1 main.go:299] handling current node
	I0819 12:48:31.957003       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:48:31.957022       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:48:31.957288       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:48:31.957325       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:48:41.964352       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:48:41.964414       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:48:41.964605       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:48:41.964632       1 main.go:299] handling current node
	I0819 12:48:41.964647       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:48:41.964652       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:48:51.956180       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:48:51.956239       1 main.go:299] handling current node
	I0819 12:48:51.956257       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:48:51.956267       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:48:51.956485       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:48:51.956515       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.2.0/24] 
	I0819 12:49:01.957913       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:49:01.957964       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.2.0/24] 
	I0819 12:49:01.958167       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:49:01.958194       1 main.go:299] handling current node
	I0819 12:49:01.958207       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:49:01.958212       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60] <==
	I0819 12:45:05.050012       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:15.058296       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:15.058437       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:15.058603       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:15.058627       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:15.058718       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:15.058738       1 main.go:299] handling current node
	I0819 12:45:25.058485       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:25.058667       1 main.go:299] handling current node
	I0819 12:45:25.058719       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:25.058740       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:25.058916       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:25.058939       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:35.056477       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:35.056586       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:35.056746       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:35.056768       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:35.056844       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:35.056876       1 main.go:299] handling current node
	I0819 12:45:45.055090       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:45.055185       1 main.go:299] handling current node
	I0819 12:45:45.055220       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:45.055228       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:45.055449       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:45.055474       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8] <==
	I0819 12:40:55.196917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:40:55.294549       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 12:40:55.302367       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I0819 12:40:55.303281       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:40:55.310973       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:40:55.654242       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:40:56.354515       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:40:56.385328       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 12:40:56.406435       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:41:01.256438       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 12:41:01.361234       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 12:42:10.050557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51218: use of closed network connection
	E0819 12:42:10.238795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51244: use of closed network connection
	E0819 12:42:10.434055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51262: use of closed network connection
	E0819 12:42:10.612036       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51284: use of closed network connection
	E0819 12:42:10.785470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51300: use of closed network connection
	E0819 12:42:10.951337       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51320: use of closed network connection
	E0819 12:42:11.232000       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51350: use of closed network connection
	E0819 12:42:11.407675       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51368: use of closed network connection
	E0819 12:42:11.572397       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51388: use of closed network connection
	E0819 12:42:11.748559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:55608: use of closed network connection
	http2: server: error reading preface from client 192.168.39.217:48318: read tcp 192.168.39.191:8443->192.168.39.217:48318: read: connection reset by peer
	I0819 12:45:50.922428       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0819 12:45:50.952639       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0819 12:45:50.954738       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab] <==
	I0819 12:47:29.948818       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:47:29.949687       1 policy_source.go:224] refreshing policies
	I0819 12:47:29.958341       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:47:30.007882       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:47:30.011740       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:47:30.011803       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:47:30.012405       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:47:30.014017       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:47:30.014428       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:47:30.014468       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:47:30.021662       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:47:30.021684       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:47:30.028561       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:47:30.028588       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:47:30.028605       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:47:30.028610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:47:30.028614       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:47:30.815928       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:47:32.139830       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:47:32.270995       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:47:32.287725       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:47:32.359783       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:47:32.366582       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:47:33.290669       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:47:33.590001       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34] <==
	I0819 12:43:26.541083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:43:26.542350       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:27.710676       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-172739-m03\" does not exist"
	I0819 12:43:27.710917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:43:27.721839       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-172739-m03" podCIDRs=["10.244.3.0/24"]
	I0819 12:43:27.721878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:27.721902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:27.737214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:28.067391       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:28.404825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:30.501937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:37.931496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:45.341833       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m03"
	I0819 12:43:45.342600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:45.354416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:45.448741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:44:30.468039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:44:30.468430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:44:30.471564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:44:30.494986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:44:30.502542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:44:30.547840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.131036ms"
	I0819 12:44:30.547931       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.165µs"
	I0819 12:44:35.627267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:44:45.705478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	
	
	==> kube-controller-manager [e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945] <==
	I0819 12:48:29.344006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:48:29.352191       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.745µs"
	I0819 12:48:29.367499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.939µs"
	I0819 12:48:31.156312       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.108811ms"
	I0819 12:48:31.156396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.172µs"
	I0819 12:48:33.476857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:48:43.793318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:48:47.332051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:47.352747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:47.567384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:47.567468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:48:48.777081       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:48:48.777788       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-172739-m03\" does not exist"
	I0819 12:48:48.792544       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-172739-m03" podCIDRs=["10.244.2.0/24"]
	I0819 12:48:48.792587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:48.792612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:48.797410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:49.088226       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:49.428585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:53.582752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:59.182243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:06.432629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:06.432739       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:49:06.451521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:08.498182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	
	
	==> kube-proxy [7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:41:04.067387       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:41:04.084747       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.191"]
	E0819 12:41:04.084834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:41:04.124609       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:41:04.124716       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:41:04.124757       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:41:04.127379       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:41:04.127703       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:41:04.127733       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:41:04.130522       1 config.go:197] "Starting service config controller"
	I0819 12:41:04.130554       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:41:04.130576       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:41:04.130580       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:41:04.133826       1 config.go:326] "Starting node config controller"
	I0819 12:41:04.133874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:41:04.231478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:41:04.231640       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:41:04.234692       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:47:31.132224       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:47:31.141520       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.191"]
	E0819 12:47:31.141602       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:47:31.231956       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:47:31.232003       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:47:31.232032       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:47:31.234594       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:47:31.234835       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:47:31.234867       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:47:31.236734       1 config.go:197] "Starting service config controller"
	I0819 12:47:31.236777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:47:31.236798       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:47:31.236802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:47:31.237437       1 config.go:326] "Starting node config controller"
	I0819 12:47:31.237467       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:47:31.337182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:47:31.337235       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:47:31.337570       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a] <==
	W0819 12:40:53.703997       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 12:40:53.704030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:53.704215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 12:40:53.704304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:53.704420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:40:53.704502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:53.705822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:40:53.707215       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 12:40:54.518368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:40:54.518404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.605225       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:40:54.605277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.620909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 12:40:54.621211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.633011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 12:40:54.633167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.637083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:40:54.637212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.934722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 12:40:54.935273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:55.080419       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:40:55.080602       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 12:40:57.568982       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:45:50.932772       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0819 12:45:50.935537       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b] <==
	I0819 12:47:27.721234       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:47:29.928748       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:47:29.928797       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:47:29.928809       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:47:29.928822       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:47:29.972339       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:47:29.972403       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:47:29.974562       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:47:29.974755       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:47:29.974802       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:47:29.974826       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:47:30.075330       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:47:36 multinode-172739 kubelet[2967]: E0819 12:47:36.242924    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071656242618886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:47:36 multinode-172739 kubelet[2967]: E0819 12:47:36.242966    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071656242618886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:47:46 multinode-172739 kubelet[2967]: E0819 12:47:46.245310    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071666244958256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:47:46 multinode-172739 kubelet[2967]: E0819 12:47:46.245359    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071666244958256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:47:56 multinode-172739 kubelet[2967]: E0819 12:47:56.248228    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071676246796318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:47:56 multinode-172739 kubelet[2967]: E0819 12:47:56.249203    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071676246796318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:06 multinode-172739 kubelet[2967]: E0819 12:48:06.251086    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071686250826972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:06 multinode-172739 kubelet[2967]: E0819 12:48:06.251153    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071686250826972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:16 multinode-172739 kubelet[2967]: E0819 12:48:16.252898    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071696252577336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:16 multinode-172739 kubelet[2967]: E0819 12:48:16.252948    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071696252577336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:26 multinode-172739 kubelet[2967]: E0819 12:48:26.254929    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071706254391278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:26 multinode-172739 kubelet[2967]: E0819 12:48:26.254960    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071706254391278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:26 multinode-172739 kubelet[2967]: E0819 12:48:26.269564    2967 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:48:26 multinode-172739 kubelet[2967]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:48:26 multinode-172739 kubelet[2967]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:48:26 multinode-172739 kubelet[2967]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:48:26 multinode-172739 kubelet[2967]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:48:36 multinode-172739 kubelet[2967]: E0819 12:48:36.256548    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071716256181524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:36 multinode-172739 kubelet[2967]: E0819 12:48:36.256814    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071716256181524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:46 multinode-172739 kubelet[2967]: E0819 12:48:46.258860    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071726258480685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:46 multinode-172739 kubelet[2967]: E0819 12:48:46.258946    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071726258480685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:56 multinode-172739 kubelet[2967]: E0819 12:48:56.261068    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071736260722722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:48:56 multinode-172739 kubelet[2967]: E0819 12:48:56.261180    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071736260722722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:49:06 multinode-172739 kubelet[2967]: E0819 12:49:06.262341    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071746262015948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:49:06 multinode-172739 kubelet[2967]: E0819 12:49:06.262599    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071746262015948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:49:08.851928 2123739 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19479-2086935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-172739 -n multinode-172739
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-172739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (322.61s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 stop
E0819 12:49:29.735902 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-172739 stop: exit status 82 (2m0.48544889s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-172739-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-172739 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status
E0819 12:51:26.668340 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-172739 status: exit status 3 (18.785347943s)

                                                
                                                
-- stdout --
	multinode-172739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-172739-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:51:32.012527 2124406 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0819 12:51:32.012568 2124406 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-172739 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-172739 -n multinode-172739
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-172739 logs -n 25: (1.455256577s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739:/home/docker/cp-test_multinode-172739-m02_multinode-172739.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739 sudo cat                                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m02_multinode-172739.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03:/home/docker/cp-test_multinode-172739-m02_multinode-172739-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739-m03 sudo cat                                   | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m02_multinode-172739-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp testdata/cp-test.txt                                                | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3931286913/001/cp-test_multinode-172739-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739:/home/docker/cp-test_multinode-172739-m03_multinode-172739.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739 sudo cat                                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m03_multinode-172739.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02:/home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739-m02 sudo cat                                   | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-172739 node stop m03                                                          | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	| node    | multinode-172739 node start                                                             | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC |                     |
	| stop    | -p multinode-172739                                                                     | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC |                     |
	| start   | -p multinode-172739                                                                     | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:45 UTC | 19 Aug 24 12:49 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC |                     |
	| node    | multinode-172739 node delete                                                            | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC | 19 Aug 24 12:49 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-172739 stop                                                                   | multinode-172739 | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:45:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:45:50.001597 2122632 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:45:50.001741 2122632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:45:50.001750 2122632 out.go:358] Setting ErrFile to fd 2...
	I0819 12:45:50.001754 2122632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:45:50.001910 2122632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:45:50.002450 2122632 out.go:352] Setting JSON to false
	I0819 12:45:50.003427 2122632 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":163700,"bootTime":1723907850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:45:50.003488 2122632 start.go:139] virtualization: kvm guest
	I0819 12:45:50.005648 2122632 out.go:177] * [multinode-172739] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:45:50.006882 2122632 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:45:50.006952 2122632 notify.go:220] Checking for updates...
	I0819 12:45:50.009036 2122632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:45:50.010169 2122632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:45:50.011192 2122632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:45:50.012362 2122632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:45:50.013614 2122632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:45:50.015255 2122632 config.go:182] Loaded profile config "multinode-172739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:45:50.015371 2122632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:45:50.016047 2122632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:45:50.016133 2122632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:45:50.031486 2122632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0819 12:45:50.031953 2122632 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:45:50.032588 2122632 main.go:141] libmachine: Using API Version  1
	I0819 12:45:50.032621 2122632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:45:50.032985 2122632 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:45:50.033207 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:45:50.068813 2122632 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:45:50.070187 2122632 start.go:297] selected driver: kvm2
	I0819 12:45:50.070210 2122632 start.go:901] validating driver "kvm2" against &{Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:45:50.070378 2122632 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:45:50.070730 2122632 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:45:50.070802 2122632 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:45:50.086239 2122632 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:45:50.086923 2122632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:45:50.086962 2122632 cni.go:84] Creating CNI manager for ""
	I0819 12:45:50.086974 2122632 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:45:50.087060 2122632 start.go:340] cluster config:
	{Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:45:50.087221 2122632 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:45:50.088854 2122632 out.go:177] * Starting "multinode-172739" primary control-plane node in "multinode-172739" cluster
	I0819 12:45:50.090015 2122632 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:45:50.090053 2122632 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:45:50.090064 2122632 cache.go:56] Caching tarball of preloaded images
	I0819 12:45:50.090136 2122632 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:45:50.090147 2122632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:45:50.090267 2122632 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/config.json ...
	I0819 12:45:50.090529 2122632 start.go:360] acquireMachinesLock for multinode-172739: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:45:50.090573 2122632 start.go:364] duration metric: took 24.835µs to acquireMachinesLock for "multinode-172739"
	I0819 12:45:50.090593 2122632 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:45:50.090605 2122632 fix.go:54] fixHost starting: 
	I0819 12:45:50.090897 2122632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:45:50.090934 2122632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:45:50.105574 2122632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0819 12:45:50.106089 2122632 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:45:50.106632 2122632 main.go:141] libmachine: Using API Version  1
	I0819 12:45:50.106653 2122632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:45:50.106980 2122632 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:45:50.107188 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:45:50.107347 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetState
	I0819 12:45:50.109369 2122632 fix.go:112] recreateIfNeeded on multinode-172739: state=Running err=<nil>
	W0819 12:45:50.109392 2122632 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:45:50.111116 2122632 out.go:177] * Updating the running kvm2 "multinode-172739" VM ...
	I0819 12:45:50.112056 2122632 machine.go:93] provisionDockerMachine start ...
	I0819 12:45:50.112077 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:45:50.112335 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.114549 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.114891 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.114913 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.115034 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.115205 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.115386 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.115502 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.115658 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.115871 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.115884 2122632 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:45:50.225446 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-172739
	
	I0819 12:45:50.225478 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetMachineName
	I0819 12:45:50.225748 2122632 buildroot.go:166] provisioning hostname "multinode-172739"
	I0819 12:45:50.225782 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetMachineName
	I0819 12:45:50.225997 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.228754 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.229150 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.229196 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.229448 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.229642 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.229795 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.229951 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.230224 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.230431 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.230452 2122632 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-172739 && echo "multinode-172739" | sudo tee /etc/hostname
	I0819 12:45:50.349216 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-172739
	
	I0819 12:45:50.349247 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.352415 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.352863 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.352887 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.353050 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.353248 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.353430 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.353562 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.353742 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.353913 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.353929 2122632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-172739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-172739/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-172739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:45:50.461245 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:45:50.461279 2122632 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:45:50.461317 2122632 buildroot.go:174] setting up certificates
	I0819 12:45:50.461330 2122632 provision.go:84] configureAuth start
	I0819 12:45:50.461340 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetMachineName
	I0819 12:45:50.461628 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:45:50.464189 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.464578 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.464608 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.464750 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.467159 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.467501 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.467528 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.467673 2122632 provision.go:143] copyHostCerts
	I0819 12:45:50.467706 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:45:50.467740 2122632 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:45:50.467758 2122632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:45:50.467826 2122632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:45:50.467928 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:45:50.467948 2122632 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:45:50.467958 2122632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:45:50.467989 2122632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:45:50.468048 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:45:50.468065 2122632 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:45:50.468071 2122632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:45:50.468093 2122632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:45:50.468153 2122632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.multinode-172739 san=[127.0.0.1 192.168.39.191 localhost minikube multinode-172739]
	I0819 12:45:50.615163 2122632 provision.go:177] copyRemoteCerts
	I0819 12:45:50.615233 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:45:50.615258 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.617938 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.618298 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.618327 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.618516 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.618740 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.619007 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.619158 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:45:50.703502 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:45:50.703596 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:45:50.740499 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:45:50.740566 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 12:45:50.765460 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:45:50.765545 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:45:50.798434 2122632 provision.go:87] duration metric: took 337.091056ms to configureAuth
	I0819 12:45:50.798471 2122632 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:45:50.798706 2122632 config.go:182] Loaded profile config "multinode-172739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:45:50.798788 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:45:50.802228 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.802676 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:45:50.802706 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:45:50.802896 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:45:50.803166 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.803386 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:45:50.803551 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:45:50.803735 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:45:50.803941 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:45:50.803959 2122632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:47:21.593879 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:47:21.593913 2122632 machine.go:96] duration metric: took 1m31.481841616s to provisionDockerMachine
	I0819 12:47:21.593928 2122632 start.go:293] postStartSetup for "multinode-172739" (driver="kvm2")
	I0819 12:47:21.593940 2122632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:47:21.593958 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.594331 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:47:21.594366 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.597719 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.598327 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.598360 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.598539 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.598768 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.598944 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.599095 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:47:21.680513 2122632 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:47:21.684593 2122632 command_runner.go:130] > NAME=Buildroot
	I0819 12:47:21.684616 2122632 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 12:47:21.684623 2122632 command_runner.go:130] > ID=buildroot
	I0819 12:47:21.684630 2122632 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 12:47:21.684637 2122632 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 12:47:21.684800 2122632 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:47:21.684833 2122632 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:47:21.684939 2122632 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:47:21.685026 2122632 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:47:21.685037 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /etc/ssl/certs/20941932.pem
	I0819 12:47:21.685121 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:47:21.695757 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:47:21.719170 2122632 start.go:296] duration metric: took 125.226186ms for postStartSetup
	I0819 12:47:21.719218 2122632 fix.go:56] duration metric: took 1m31.628613971s for fixHost
	I0819 12:47:21.719245 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.722386 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.722881 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.722912 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.723043 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.723244 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.723416 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.723553 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.723735 2122632 main.go:141] libmachine: Using SSH client type: native
	I0819 12:47:21.723980 2122632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0819 12:47:21.723995 2122632 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:47:21.825010 2122632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724071641.805085157
	
	I0819 12:47:21.825039 2122632 fix.go:216] guest clock: 1724071641.805085157
	I0819 12:47:21.825047 2122632 fix.go:229] Guest: 2024-08-19 12:47:21.805085157 +0000 UTC Remote: 2024-08-19 12:47:21.719225922 +0000 UTC m=+91.754348207 (delta=85.859235ms)
	I0819 12:47:21.825091 2122632 fix.go:200] guest clock delta is within tolerance: 85.859235ms
	I0819 12:47:21.825099 2122632 start.go:83] releasing machines lock for "multinode-172739", held for 1m31.734514797s
	I0819 12:47:21.825122 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.825401 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:47:21.828219 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.828630 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.828673 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.828811 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.829408 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.829623 2122632 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:47:21.829695 2122632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:47:21.829757 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.829852 2122632 ssh_runner.go:195] Run: cat /version.json
	I0819 12:47:21.829865 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:47:21.832383 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.832521 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.832758 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.832785 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.832954 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.832957 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:21.832995 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:21.833116 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.833142 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:47:21.833249 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.833307 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:47:21.833409 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:47:21.833473 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:47:21.833621 2122632 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:47:21.908484 2122632 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 12:47:21.908671 2122632 ssh_runner.go:195] Run: systemctl --version
	I0819 12:47:21.931014 2122632 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 12:47:21.931060 2122632 command_runner.go:130] > systemd 252 (252)
	I0819 12:47:21.931079 2122632 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 12:47:21.931145 2122632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:47:22.088151 2122632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:47:22.096456 2122632 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 12:47:22.096878 2122632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:47:22.096982 2122632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:47:22.106949 2122632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:47:22.106980 2122632 start.go:495] detecting cgroup driver to use...
	I0819 12:47:22.107053 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:47:22.124135 2122632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:47:22.139253 2122632 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:47:22.139320 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:47:22.153294 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:47:22.166354 2122632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:47:22.308668 2122632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:47:22.449169 2122632 docker.go:233] disabling docker service ...
	I0819 12:47:22.449253 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:47:22.467248 2122632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:47:22.481074 2122632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:47:22.616590 2122632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:47:22.752967 2122632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:47:22.766734 2122632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:47:22.784816 2122632 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 12:47:22.785104 2122632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:47:22.785161 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.795546 2122632 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:47:22.795596 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.805810 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.815787 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.826472 2122632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:47:22.837613 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.848715 2122632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.860363 2122632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:47:22.871612 2122632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:47:22.881670 2122632 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 12:47:22.881756 2122632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:47:22.891441 2122632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:47:23.026272 2122632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:47:23.264119 2122632 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:47:23.264207 2122632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:47:23.269094 2122632 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 12:47:23.269122 2122632 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 12:47:23.269132 2122632 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0819 12:47:23.269142 2122632 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:47:23.269150 2122632 command_runner.go:130] > Access: 2024-08-19 12:47:23.140406004 +0000
	I0819 12:47:23.269171 2122632 command_runner.go:130] > Modify: 2024-08-19 12:47:23.140406004 +0000
	I0819 12:47:23.269184 2122632 command_runner.go:130] > Change: 2024-08-19 12:47:23.140406004 +0000
	I0819 12:47:23.269189 2122632 command_runner.go:130] >  Birth: -
	I0819 12:47:23.269212 2122632 start.go:563] Will wait 60s for crictl version
	I0819 12:47:23.269257 2122632 ssh_runner.go:195] Run: which crictl
	I0819 12:47:23.273080 2122632 command_runner.go:130] > /usr/bin/crictl
	I0819 12:47:23.273150 2122632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:47:23.313101 2122632 command_runner.go:130] > Version:  0.1.0
	I0819 12:47:23.313126 2122632 command_runner.go:130] > RuntimeName:  cri-o
	I0819 12:47:23.313132 2122632 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 12:47:23.313138 2122632 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 12:47:23.314304 2122632 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:47:23.314393 2122632 ssh_runner.go:195] Run: crio --version
	I0819 12:47:23.342426 2122632 command_runner.go:130] > crio version 1.29.1
	I0819 12:47:23.342458 2122632 command_runner.go:130] > Version:        1.29.1
	I0819 12:47:23.342467 2122632 command_runner.go:130] > GitCommit:      unknown
	I0819 12:47:23.342474 2122632 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:47:23.342481 2122632 command_runner.go:130] > GitTreeState:   clean
	I0819 12:47:23.342488 2122632 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:47:23.342492 2122632 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:47:23.342496 2122632 command_runner.go:130] > Compiler:       gc
	I0819 12:47:23.342508 2122632 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:47:23.342512 2122632 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:47:23.342519 2122632 command_runner.go:130] > BuildTags:      
	I0819 12:47:23.342524 2122632 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:47:23.342531 2122632 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:47:23.342537 2122632 command_runner.go:130] >   btrfs_noversion
	I0819 12:47:23.342544 2122632 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:47:23.342552 2122632 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:47:23.342558 2122632 command_runner.go:130] >   seccomp
	I0819 12:47:23.342576 2122632 command_runner.go:130] > LDFlags:          unknown
	I0819 12:47:23.342586 2122632 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:47:23.342591 2122632 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:47:23.342673 2122632 ssh_runner.go:195] Run: crio --version
	I0819 12:47:23.368381 2122632 command_runner.go:130] > crio version 1.29.1
	I0819 12:47:23.368401 2122632 command_runner.go:130] > Version:        1.29.1
	I0819 12:47:23.368410 2122632 command_runner.go:130] > GitCommit:      unknown
	I0819 12:47:23.368414 2122632 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:47:23.368418 2122632 command_runner.go:130] > GitTreeState:   clean
	I0819 12:47:23.368425 2122632 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:47:23.368429 2122632 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:47:23.368433 2122632 command_runner.go:130] > Compiler:       gc
	I0819 12:47:23.368438 2122632 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:47:23.368442 2122632 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:47:23.368446 2122632 command_runner.go:130] > BuildTags:      
	I0819 12:47:23.368450 2122632 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:47:23.368454 2122632 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:47:23.368459 2122632 command_runner.go:130] >   btrfs_noversion
	I0819 12:47:23.368465 2122632 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:47:23.368472 2122632 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:47:23.368478 2122632 command_runner.go:130] >   seccomp
	I0819 12:47:23.368484 2122632 command_runner.go:130] > LDFlags:          unknown
	I0819 12:47:23.368490 2122632 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:47:23.368499 2122632 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:47:23.371945 2122632 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:47:23.372866 2122632 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:47:23.375359 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:23.375725 2122632 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:47:23.375752 2122632 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:47:23.375946 2122632 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:47:23.380147 2122632 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 12:47:23.380267 2122632 kubeadm.go:883] updating cluster {Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:47:23.380443 2122632 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:47:23.380491 2122632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:47:23.425487 2122632 command_runner.go:130] > {
	I0819 12:47:23.425519 2122632 command_runner.go:130] >   "images": [
	I0819 12:47:23.425527 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425540 2122632 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:47:23.425548 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425559 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:47:23.425565 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425573 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425608 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:47:23.425624 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:47:23.425631 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425638 2122632 command_runner.go:130] >       "size": "87165492",
	I0819 12:47:23.425646 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425652 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425663 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425670 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425681 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425687 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425699 2122632 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:47:23.425708 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425716 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:47:23.425724 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425728 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425736 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:47:23.425745 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:47:23.425749 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425753 2122632 command_runner.go:130] >       "size": "87190579",
	I0819 12:47:23.425758 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425766 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425773 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425778 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425781 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425787 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425797 2122632 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:47:23.425802 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425807 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:47:23.425811 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425818 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425825 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:47:23.425832 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:47:23.425838 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425843 2122632 command_runner.go:130] >       "size": "1363676",
	I0819 12:47:23.425848 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425854 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425859 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425866 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425870 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425877 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425883 2122632 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:47:23.425890 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425895 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:47:23.425902 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425906 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425914 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:47:23.425927 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:47:23.425934 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425939 2122632 command_runner.go:130] >       "size": "31470524",
	I0819 12:47:23.425943 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.425947 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.425951 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.425956 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.425960 2122632 command_runner.go:130] >     },
	I0819 12:47:23.425963 2122632 command_runner.go:130] >     {
	I0819 12:47:23.425969 2122632 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:47:23.425975 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.425980 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:47:23.425986 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.425991 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.425997 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:47:23.426007 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:47:23.426013 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426017 2122632 command_runner.go:130] >       "size": "61245718",
	I0819 12:47:23.426024 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.426028 2122632 command_runner.go:130] >       "username": "nonroot",
	I0819 12:47:23.426035 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426039 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426043 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426046 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426053 2122632 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:47:23.426059 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426066 2122632 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:47:23.426073 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426077 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426087 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:47:23.426094 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:47:23.426100 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426104 2122632 command_runner.go:130] >       "size": "149009664",
	I0819 12:47:23.426111 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426115 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426121 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426126 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426129 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426133 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426137 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426141 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426147 2122632 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:47:23.426153 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426158 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:47:23.426165 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426169 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426181 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:47:23.426191 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:47:23.426194 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426199 2122632 command_runner.go:130] >       "size": "95233506",
	I0819 12:47:23.426203 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426212 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426217 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426221 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426224 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426228 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426232 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426236 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426242 2122632 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:47:23.426249 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426254 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:47:23.426259 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426265 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426284 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:47:23.426294 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:47:23.426301 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426305 2122632 command_runner.go:130] >       "size": "89437512",
	I0819 12:47:23.426309 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426314 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426325 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426330 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426334 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426338 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426342 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426345 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426351 2122632 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:47:23.426355 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426361 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:47:23.426364 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426368 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426375 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:47:23.426385 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:47:23.426390 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426396 2122632 command_runner.go:130] >       "size": "92728217",
	I0819 12:47:23.426401 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.426406 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426410 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426414 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426418 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426421 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426427 2122632 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:47:23.426433 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426439 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:47:23.426446 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426450 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426463 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:47:23.426472 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:47:23.426479 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426484 2122632 command_runner.go:130] >       "size": "68420936",
	I0819 12:47:23.426491 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426496 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.426499 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426505 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426512 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426517 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.426520 2122632 command_runner.go:130] >     },
	I0819 12:47:23.426524 2122632 command_runner.go:130] >     {
	I0819 12:47:23.426530 2122632 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:47:23.426537 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.426542 2122632 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:47:23.426546 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426552 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.426567 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:47:23.426582 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:47:23.426593 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.426601 2122632 command_runner.go:130] >       "size": "742080",
	I0819 12:47:23.426612 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.426625 2122632 command_runner.go:130] >         "value": "65535"
	I0819 12:47:23.426635 2122632 command_runner.go:130] >       },
	I0819 12:47:23.426642 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.426649 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.426654 2122632 command_runner.go:130] >       "pinned": true
	I0819 12:47:23.426660 2122632 command_runner.go:130] >     }
	I0819 12:47:23.426663 2122632 command_runner.go:130] >   ]
	I0819 12:47:23.426667 2122632 command_runner.go:130] > }
	I0819 12:47:23.426899 2122632 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:47:23.426917 2122632 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:47:23.426984 2122632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:47:23.462359 2122632 command_runner.go:130] > {
	I0819 12:47:23.462394 2122632 command_runner.go:130] >   "images": [
	I0819 12:47:23.462399 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462407 2122632 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:47:23.462413 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462418 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:47:23.462422 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462426 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462435 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:47:23.462441 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:47:23.462445 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462449 2122632 command_runner.go:130] >       "size": "87165492",
	I0819 12:47:23.462453 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462457 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462472 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462486 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462490 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462493 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462499 2122632 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:47:23.462503 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462512 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:47:23.462518 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462522 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462532 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:47:23.462539 2122632 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:47:23.462545 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462549 2122632 command_runner.go:130] >       "size": "87190579",
	I0819 12:47:23.462553 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462564 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462574 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462581 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462589 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462595 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462605 2122632 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:47:23.462614 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462625 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:47:23.462633 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462639 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462653 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:47:23.462668 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:47:23.462677 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462684 2122632 command_runner.go:130] >       "size": "1363676",
	I0819 12:47:23.462692 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462697 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462715 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462721 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462725 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462731 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462737 2122632 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:47:23.462741 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462748 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:47:23.462754 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462758 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462769 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:47:23.462789 2122632 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:47:23.462798 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462805 2122632 command_runner.go:130] >       "size": "31470524",
	I0819 12:47:23.462814 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462824 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.462830 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462838 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462842 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462846 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462852 2122632 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:47:23.462859 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462864 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:47:23.462869 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462873 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462882 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:47:23.462892 2122632 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:47:23.462897 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462902 2122632 command_runner.go:130] >       "size": "61245718",
	I0819 12:47:23.462906 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.462911 2122632 command_runner.go:130] >       "username": "nonroot",
	I0819 12:47:23.462915 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.462920 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.462923 2122632 command_runner.go:130] >     },
	I0819 12:47:23.462927 2122632 command_runner.go:130] >     {
	I0819 12:47:23.462933 2122632 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:47:23.462937 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.462942 2122632 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:47:23.462946 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462950 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.462957 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:47:23.462966 2122632 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:47:23.462969 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.462974 2122632 command_runner.go:130] >       "size": "149009664",
	I0819 12:47:23.462980 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.462984 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.462991 2122632 command_runner.go:130] >       },
	I0819 12:47:23.462997 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463001 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463007 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463011 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463110 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463116 2122632 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:47:23.463123 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463127 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:47:23.463131 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463135 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463142 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:47:23.463152 2122632 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:47:23.463155 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463163 2122632 command_runner.go:130] >       "size": "95233506",
	I0819 12:47:23.463168 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463174 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.463183 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463190 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463198 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463204 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463211 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463216 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463229 2122632 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:47:23.463238 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463246 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:47:23.463254 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463260 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463285 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:47:23.463301 2122632 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:47:23.463308 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463315 2122632 command_runner.go:130] >       "size": "89437512",
	I0819 12:47:23.463323 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463329 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.463333 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463338 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463346 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463352 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463360 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463366 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463376 2122632 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:47:23.463386 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463397 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:47:23.463404 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463408 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463415 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:47:23.463429 2122632 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:47:23.463435 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463440 2122632 command_runner.go:130] >       "size": "92728217",
	I0819 12:47:23.463446 2122632 command_runner.go:130] >       "uid": null,
	I0819 12:47:23.463452 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463461 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463468 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463483 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463491 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463499 2122632 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:47:23.463505 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463510 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:47:23.463516 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463520 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463529 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:47:23.463540 2122632 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:47:23.463548 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463555 2122632 command_runner.go:130] >       "size": "68420936",
	I0819 12:47:23.463564 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463571 2122632 command_runner.go:130] >         "value": "0"
	I0819 12:47:23.463579 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463585 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463594 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463602 2122632 command_runner.go:130] >       "pinned": false
	I0819 12:47:23.463611 2122632 command_runner.go:130] >     },
	I0819 12:47:23.463616 2122632 command_runner.go:130] >     {
	I0819 12:47:23.463628 2122632 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:47:23.463637 2122632 command_runner.go:130] >       "repoTags": [
	I0819 12:47:23.463645 2122632 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:47:23.463653 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463659 2122632 command_runner.go:130] >       "repoDigests": [
	I0819 12:47:23.463673 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:47:23.463687 2122632 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:47:23.463695 2122632 command_runner.go:130] >       ],
	I0819 12:47:23.463702 2122632 command_runner.go:130] >       "size": "742080",
	I0819 12:47:23.463710 2122632 command_runner.go:130] >       "uid": {
	I0819 12:47:23.463717 2122632 command_runner.go:130] >         "value": "65535"
	I0819 12:47:23.463725 2122632 command_runner.go:130] >       },
	I0819 12:47:23.463731 2122632 command_runner.go:130] >       "username": "",
	I0819 12:47:23.463743 2122632 command_runner.go:130] >       "spec": null,
	I0819 12:47:23.463753 2122632 command_runner.go:130] >       "pinned": true
	I0819 12:47:23.463758 2122632 command_runner.go:130] >     }
	I0819 12:47:23.463767 2122632 command_runner.go:130] >   ]
	I0819 12:47:23.463772 2122632 command_runner.go:130] > }
	I0819 12:47:23.463926 2122632 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:47:23.463944 2122632 cache_images.go:84] Images are preloaded, skipping loading
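
The two `sudo crictl images --output json` calls above are how the preload check decides that every required image is already present in the CRI-O store. Below is a minimal Go sketch of that decoding step only; it is not minikube's actual implementation, and the struct fields are limited to the ones visible in the JSON dump in this log (id, repoTags, repoDigests, size, pinned).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields shown in the `crictl images --output json`
// output captured in the log above; no other fields are assumed.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Run the same command the log shows being executed on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Print each image's tags and reported size, as listed in the log.
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

Run on the node itself, this would print the same tags and sizes the log lists for the preloaded v1.31.0 images.
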
	I0819 12:47:23.463959 2122632 kubeadm.go:934] updating node { 192.168.39.191 8443 v1.31.0 crio true true} ...
	I0819 12:47:23.464085 2122632 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-172739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
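
The kubelet unit fragment logged above is rendered from per-node values: the kubelet binary path for the requested Kubernetes version, the node's hostname override, and its IP. A rough sketch of templating such a fragment is shown below; the kubeletParams type is hypothetical and not minikube's actual template, while the flag values are the ones already visible in the log.

package main

import (
	"os"
	"text/template"
)

// kubeletParams is a hypothetical parameter struct holding the per-node
// values that appear in the generated unit fragment above.
type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

// kubeletUnit reproduces the shape of the fragment from the log, with the
// node-specific values replaced by template fields.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the control-plane node shown in this log.
	_ = t.Execute(os.Stdout, kubeletParams{
		KubernetesVersion: "v1.31.0",
		NodeName:          "multinode-172739",
		NodeIP:            "192.168.39.191",
	})
}
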
	I0819 12:47:23.464156 2122632 ssh_runner.go:195] Run: crio config
	I0819 12:47:23.496728 2122632 command_runner.go:130] ! time="2024-08-19 12:47:23.476909496Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 12:47:23.502906 2122632 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 12:47:23.507650 2122632 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 12:47:23.507676 2122632 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 12:47:23.507683 2122632 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 12:47:23.507687 2122632 command_runner.go:130] > #
	I0819 12:47:23.507706 2122632 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 12:47:23.507717 2122632 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 12:47:23.507726 2122632 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 12:47:23.507740 2122632 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 12:47:23.507745 2122632 command_runner.go:130] > # reload'.
	I0819 12:47:23.507754 2122632 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 12:47:23.507767 2122632 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 12:47:23.507780 2122632 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 12:47:23.507790 2122632 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 12:47:23.507796 2122632 command_runner.go:130] > [crio]
	I0819 12:47:23.507808 2122632 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 12:47:23.507816 2122632 command_runner.go:130] > # containers images, in this directory.
	I0819 12:47:23.507826 2122632 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 12:47:23.507839 2122632 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 12:47:23.507847 2122632 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 12:47:23.507855 2122632 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 12:47:23.507862 2122632 command_runner.go:130] > # imagestore = ""
	I0819 12:47:23.507869 2122632 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 12:47:23.507877 2122632 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 12:47:23.507883 2122632 command_runner.go:130] > storage_driver = "overlay"
	I0819 12:47:23.507891 2122632 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 12:47:23.507897 2122632 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 12:47:23.507907 2122632 command_runner.go:130] > storage_option = [
	I0819 12:47:23.507913 2122632 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 12:47:23.507917 2122632 command_runner.go:130] > ]
	I0819 12:47:23.507923 2122632 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 12:47:23.507931 2122632 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 12:47:23.507936 2122632 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 12:47:23.507941 2122632 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 12:47:23.507949 2122632 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 12:47:23.507954 2122632 command_runner.go:130] > # always happen on a node reboot
	I0819 12:47:23.507960 2122632 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 12:47:23.507989 2122632 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 12:47:23.508002 2122632 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 12:47:23.508007 2122632 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 12:47:23.508014 2122632 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 12:47:23.508021 2122632 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 12:47:23.508030 2122632 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 12:47:23.508035 2122632 command_runner.go:130] > # internal_wipe = true
	I0819 12:47:23.508043 2122632 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 12:47:23.508050 2122632 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 12:47:23.508054 2122632 command_runner.go:130] > # internal_repair = false
	I0819 12:47:23.508062 2122632 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 12:47:23.508068 2122632 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 12:47:23.508075 2122632 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 12:47:23.508080 2122632 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 12:47:23.508088 2122632 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 12:47:23.508092 2122632 command_runner.go:130] > [crio.api]
	I0819 12:47:23.508097 2122632 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 12:47:23.508104 2122632 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 12:47:23.508109 2122632 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 12:47:23.508114 2122632 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 12:47:23.508120 2122632 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 12:47:23.508127 2122632 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 12:47:23.508131 2122632 command_runner.go:130] > # stream_port = "0"
	I0819 12:47:23.508140 2122632 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 12:47:23.508144 2122632 command_runner.go:130] > # stream_enable_tls = false
	I0819 12:47:23.508152 2122632 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 12:47:23.508156 2122632 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 12:47:23.508182 2122632 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 12:47:23.508195 2122632 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 12:47:23.508200 2122632 command_runner.go:130] > # minutes.
	I0819 12:47:23.508207 2122632 command_runner.go:130] > # stream_tls_cert = ""
	I0819 12:47:23.508213 2122632 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 12:47:23.508223 2122632 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 12:47:23.508229 2122632 command_runner.go:130] > # stream_tls_key = ""
	I0819 12:47:23.508235 2122632 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 12:47:23.508243 2122632 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 12:47:23.508257 2122632 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 12:47:23.508263 2122632 command_runner.go:130] > # stream_tls_ca = ""
	I0819 12:47:23.508270 2122632 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:47:23.508276 2122632 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 12:47:23.508283 2122632 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:47:23.508289 2122632 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 12:47:23.508295 2122632 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 12:47:23.508302 2122632 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 12:47:23.508306 2122632 command_runner.go:130] > [crio.runtime]
	I0819 12:47:23.508312 2122632 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 12:47:23.508317 2122632 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 12:47:23.508321 2122632 command_runner.go:130] > # "nofile=1024:2048"
	I0819 12:47:23.508327 2122632 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 12:47:23.508333 2122632 command_runner.go:130] > # default_ulimits = [
	I0819 12:47:23.508337 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508345 2122632 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 12:47:23.508351 2122632 command_runner.go:130] > # no_pivot = false
	I0819 12:47:23.508357 2122632 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 12:47:23.508363 2122632 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 12:47:23.508368 2122632 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 12:47:23.508373 2122632 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 12:47:23.508380 2122632 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 12:47:23.508386 2122632 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:47:23.508393 2122632 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 12:47:23.508398 2122632 command_runner.go:130] > # Cgroup setting for conmon
	I0819 12:47:23.508406 2122632 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 12:47:23.508411 2122632 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 12:47:23.508418 2122632 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 12:47:23.508424 2122632 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 12:47:23.508434 2122632 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:47:23.508439 2122632 command_runner.go:130] > conmon_env = [
	I0819 12:47:23.508445 2122632 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:47:23.508448 2122632 command_runner.go:130] > ]
	I0819 12:47:23.508453 2122632 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 12:47:23.508460 2122632 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 12:47:23.508466 2122632 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 12:47:23.508479 2122632 command_runner.go:130] > # default_env = [
	I0819 12:47:23.508484 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508490 2122632 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 12:47:23.508497 2122632 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 12:47:23.508503 2122632 command_runner.go:130] > # selinux = false
	I0819 12:47:23.508510 2122632 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 12:47:23.508518 2122632 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 12:47:23.508524 2122632 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 12:47:23.508528 2122632 command_runner.go:130] > # seccomp_profile = ""
	I0819 12:47:23.508533 2122632 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 12:47:23.508539 2122632 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 12:47:23.508547 2122632 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 12:47:23.508551 2122632 command_runner.go:130] > # which might increase security.
	I0819 12:47:23.508555 2122632 command_runner.go:130] > # This option is currently deprecated,
	I0819 12:47:23.508566 2122632 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 12:47:23.508576 2122632 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 12:47:23.508586 2122632 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 12:47:23.508598 2122632 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 12:47:23.508610 2122632 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 12:47:23.508622 2122632 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 12:47:23.508630 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.508639 2122632 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 12:47:23.508648 2122632 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 12:47:23.508657 2122632 command_runner.go:130] > # the cgroup blockio controller.
	I0819 12:47:23.508661 2122632 command_runner.go:130] > # blockio_config_file = ""
	I0819 12:47:23.508670 2122632 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 12:47:23.508675 2122632 command_runner.go:130] > # blockio parameters.
	I0819 12:47:23.508681 2122632 command_runner.go:130] > # blockio_reload = false
	I0819 12:47:23.508687 2122632 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 12:47:23.508693 2122632 command_runner.go:130] > # irqbalance daemon.
	I0819 12:47:23.508698 2122632 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 12:47:23.508709 2122632 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 12:47:23.508718 2122632 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 12:47:23.508724 2122632 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 12:47:23.508732 2122632 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 12:47:23.508738 2122632 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 12:47:23.508746 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.508750 2122632 command_runner.go:130] > # rdt_config_file = ""
	I0819 12:47:23.508758 2122632 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 12:47:23.508764 2122632 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 12:47:23.508784 2122632 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 12:47:23.508791 2122632 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 12:47:23.508797 2122632 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 12:47:23.508804 2122632 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 12:47:23.508808 2122632 command_runner.go:130] > # will be added.
	I0819 12:47:23.508812 2122632 command_runner.go:130] > # default_capabilities = [
	I0819 12:47:23.508815 2122632 command_runner.go:130] > # 	"CHOWN",
	I0819 12:47:23.508821 2122632 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 12:47:23.508827 2122632 command_runner.go:130] > # 	"FSETID",
	I0819 12:47:23.508830 2122632 command_runner.go:130] > # 	"FOWNER",
	I0819 12:47:23.508834 2122632 command_runner.go:130] > # 	"SETGID",
	I0819 12:47:23.508837 2122632 command_runner.go:130] > # 	"SETUID",
	I0819 12:47:23.508841 2122632 command_runner.go:130] > # 	"SETPCAP",
	I0819 12:47:23.508845 2122632 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 12:47:23.508848 2122632 command_runner.go:130] > # 	"KILL",
	I0819 12:47:23.508852 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508859 2122632 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 12:47:23.508867 2122632 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 12:47:23.508872 2122632 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 12:47:23.508881 2122632 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 12:47:23.508889 2122632 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:47:23.508894 2122632 command_runner.go:130] > default_sysctls = [
	I0819 12:47:23.508899 2122632 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 12:47:23.508902 2122632 command_runner.go:130] > ]
	I0819 12:47:23.508907 2122632 command_runner.go:130] > # List of devices on the host that a
	I0819 12:47:23.508912 2122632 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 12:47:23.508918 2122632 command_runner.go:130] > # allowed_devices = [
	I0819 12:47:23.508922 2122632 command_runner.go:130] > # 	"/dev/fuse",
	I0819 12:47:23.508925 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508930 2122632 command_runner.go:130] > # List of additional devices. specified as
	I0819 12:47:23.508939 2122632 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 12:47:23.508944 2122632 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 12:47:23.508954 2122632 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:47:23.508959 2122632 command_runner.go:130] > # additional_devices = [
	I0819 12:47:23.508962 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508969 2122632 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 12:47:23.508973 2122632 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 12:47:23.508979 2122632 command_runner.go:130] > # 	"/etc/cdi",
	I0819 12:47:23.508983 2122632 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 12:47:23.508986 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.508991 2122632 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 12:47:23.508999 2122632 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 12:47:23.509003 2122632 command_runner.go:130] > # Defaults to false.
	I0819 12:47:23.509010 2122632 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 12:47:23.509016 2122632 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 12:47:23.509025 2122632 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 12:47:23.509031 2122632 command_runner.go:130] > # hooks_dir = [
	I0819 12:47:23.509036 2122632 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 12:47:23.509040 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.509046 2122632 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 12:47:23.509054 2122632 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 12:47:23.509059 2122632 command_runner.go:130] > # its default mounts from the following two files:
	I0819 12:47:23.509062 2122632 command_runner.go:130] > #
	I0819 12:47:23.509068 2122632 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 12:47:23.509076 2122632 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 12:47:23.509082 2122632 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 12:47:23.509088 2122632 command_runner.go:130] > #
	I0819 12:47:23.509093 2122632 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 12:47:23.509102 2122632 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 12:47:23.509108 2122632 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 12:47:23.509114 2122632 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 12:47:23.509118 2122632 command_runner.go:130] > #
	I0819 12:47:23.509124 2122632 command_runner.go:130] > # default_mounts_file = ""
	I0819 12:47:23.509129 2122632 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 12:47:23.509137 2122632 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 12:47:23.509141 2122632 command_runner.go:130] > pids_limit = 1024
	I0819 12:47:23.509146 2122632 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 12:47:23.509154 2122632 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 12:47:23.509161 2122632 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 12:47:23.509170 2122632 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 12:47:23.509174 2122632 command_runner.go:130] > # log_size_max = -1
	I0819 12:47:23.509180 2122632 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 12:47:23.509189 2122632 command_runner.go:130] > # log_to_journald = false
	I0819 12:47:23.509195 2122632 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 12:47:23.509200 2122632 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 12:47:23.509205 2122632 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 12:47:23.509212 2122632 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 12:47:23.509218 2122632 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 12:47:23.509225 2122632 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 12:47:23.509230 2122632 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 12:47:23.509235 2122632 command_runner.go:130] > # read_only = false
	I0819 12:47:23.509242 2122632 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 12:47:23.509248 2122632 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 12:47:23.509254 2122632 command_runner.go:130] > # live configuration reload.
	I0819 12:47:23.509258 2122632 command_runner.go:130] > # log_level = "info"
	I0819 12:47:23.509263 2122632 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 12:47:23.509270 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.509274 2122632 command_runner.go:130] > # log_filter = ""
	I0819 12:47:23.509281 2122632 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 12:47:23.509288 2122632 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 12:47:23.509294 2122632 command_runner.go:130] > # separated by comma.
	I0819 12:47:23.509303 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509309 2122632 command_runner.go:130] > # uid_mappings = ""
	I0819 12:47:23.509315 2122632 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 12:47:23.509320 2122632 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 12:47:23.509325 2122632 command_runner.go:130] > # separated by comma.
	I0819 12:47:23.509332 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509338 2122632 command_runner.go:130] > # gid_mappings = ""
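A hypothetical illustration of the containerUID:HostUID:Size syntax described above (the ranges are placeholders, and both options are deprecated as noted):

	[crio.runtime]
	# Map container IDs 0-65535 onto host IDs starting at 100000.
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"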
	I0819 12:47:23.509343 2122632 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 12:47:23.509350 2122632 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:47:23.509356 2122632 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:47:23.509365 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509369 2122632 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 12:47:23.509375 2122632 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 12:47:23.509383 2122632 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:47:23.509389 2122632 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:47:23.509398 2122632 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:47:23.509405 2122632 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 12:47:23.509411 2122632 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 12:47:23.509416 2122632 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 12:47:23.509424 2122632 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 12:47:23.509428 2122632 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 12:47:23.509435 2122632 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 12:47:23.509441 2122632 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 12:47:23.509448 2122632 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 12:47:23.509453 2122632 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 12:47:23.509457 2122632 command_runner.go:130] > drop_infra_ctr = false
	I0819 12:47:23.509463 2122632 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 12:47:23.509474 2122632 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 12:47:23.509480 2122632 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 12:47:23.509486 2122632 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 12:47:23.509493 2122632 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 12:47:23.509500 2122632 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 12:47:23.509506 2122632 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 12:47:23.509513 2122632 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 12:47:23.509517 2122632 command_runner.go:130] > # shared_cpuset = ""
	I0819 12:47:23.509525 2122632 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 12:47:23.509531 2122632 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 12:47:23.509537 2122632 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 12:47:23.509544 2122632 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 12:47:23.509550 2122632 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 12:47:23.509555 2122632 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 12:47:23.509567 2122632 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 12:47:23.509573 2122632 command_runner.go:130] > # enable_criu_support = false
	I0819 12:47:23.509582 2122632 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 12:47:23.509594 2122632 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 12:47:23.509603 2122632 command_runner.go:130] > # enable_pod_events = false
	I0819 12:47:23.509612 2122632 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:47:23.509635 2122632 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 12:47:23.509644 2122632 command_runner.go:130] > # default_runtime = "runc"
	I0819 12:47:23.509653 2122632 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 12:47:23.509663 2122632 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 12:47:23.509671 2122632 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 12:47:23.509682 2122632 command_runner.go:130] > # creation as a file is not desired either.
	I0819 12:47:23.509692 2122632 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 12:47:23.509699 2122632 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 12:47:23.509704 2122632 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 12:47:23.509709 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.509715 2122632 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 12:47:23.509723 2122632 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 12:47:23.509729 2122632 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 12:47:23.509736 2122632 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 12:47:23.509739 2122632 command_runner.go:130] > #
	I0819 12:47:23.509744 2122632 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 12:47:23.509751 2122632 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 12:47:23.509773 2122632 command_runner.go:130] > # runtime_type = "oci"
	I0819 12:47:23.509780 2122632 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 12:47:23.509784 2122632 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 12:47:23.509789 2122632 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 12:47:23.509794 2122632 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 12:47:23.509798 2122632 command_runner.go:130] > # monitor_env = []
	I0819 12:47:23.509803 2122632 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 12:47:23.509809 2122632 command_runner.go:130] > # allowed_annotations = []
	I0819 12:47:23.509814 2122632 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 12:47:23.509819 2122632 command_runner.go:130] > # Where:
	I0819 12:47:23.509824 2122632 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 12:47:23.509831 2122632 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 12:47:23.509837 2122632 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 12:47:23.509845 2122632 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 12:47:23.509849 2122632 command_runner.go:130] > #   in $PATH.
	I0819 12:47:23.509855 2122632 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 12:47:23.509861 2122632 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 12:47:23.509868 2122632 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 12:47:23.509873 2122632 command_runner.go:130] > #   state.
	I0819 12:47:23.509879 2122632 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 12:47:23.509887 2122632 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 12:47:23.509908 2122632 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 12:47:23.509922 2122632 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 12:47:23.509928 2122632 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 12:47:23.509937 2122632 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 12:47:23.509945 2122632 command_runner.go:130] > #   The currently recognized values are:
	I0819 12:47:23.509953 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 12:47:23.509961 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 12:47:23.509969 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 12:47:23.509975 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 12:47:23.509982 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 12:47:23.509990 2122632 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 12:47:23.509998 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 12:47:23.510004 2122632 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 12:47:23.510012 2122632 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 12:47:23.510017 2122632 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 12:47:23.510024 2122632 command_runner.go:130] > #   deprecated option "conmon".
	I0819 12:47:23.510031 2122632 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 12:47:23.510038 2122632 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 12:47:23.510044 2122632 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 12:47:23.510051 2122632 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 12:47:23.510057 2122632 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 12:47:23.510061 2122632 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 12:47:23.510069 2122632 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 12:47:23.510074 2122632 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 12:47:23.510077 2122632 command_runner.go:130] > #
	I0819 12:47:23.510081 2122632 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 12:47:23.510084 2122632 command_runner.go:130] > #
	I0819 12:47:23.510089 2122632 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 12:47:23.510095 2122632 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 12:47:23.510098 2122632 command_runner.go:130] > #
	I0819 12:47:23.510104 2122632 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 12:47:23.510109 2122632 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 12:47:23.510112 2122632 command_runner.go:130] > #
	I0819 12:47:23.510117 2122632 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 12:47:23.510121 2122632 command_runner.go:130] > # feature.
	I0819 12:47:23.510123 2122632 command_runner.go:130] > #
	I0819 12:47:23.510129 2122632 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 12:47:23.510134 2122632 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 12:47:23.510140 2122632 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 12:47:23.510151 2122632 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 12:47:23.510157 2122632 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 12:47:23.510159 2122632 command_runner.go:130] > #
	I0819 12:47:23.510165 2122632 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 12:47:23.510170 2122632 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 12:47:23.510173 2122632 command_runner.go:130] > #
	I0819 12:47:23.510178 2122632 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 12:47:23.510185 2122632 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 12:47:23.510188 2122632 command_runner.go:130] > #
	I0819 12:47:23.510194 2122632 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 12:47:23.510199 2122632 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 12:47:23.510203 2122632 command_runner.go:130] > # limitation.
	I0819 12:47:23.510208 2122632 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 12:47:23.510214 2122632 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 12:47:23.510218 2122632 command_runner.go:130] > runtime_type = "oci"
	I0819 12:47:23.510223 2122632 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 12:47:23.510230 2122632 command_runner.go:130] > runtime_config_path = ""
	I0819 12:47:23.510236 2122632 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 12:47:23.510242 2122632 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 12:47:23.510247 2122632 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 12:47:23.510253 2122632 command_runner.go:130] > monitor_env = [
	I0819 12:47:23.510259 2122632 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:47:23.510264 2122632 command_runner.go:130] > ]
	I0819 12:47:23.510269 2122632 command_runner.go:130] > privileged_without_host_devices = false
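For comparison with the runc entry above, a sketch of an additional runtime handler that opts into the seccomp notifier annotation; the handler name and binary path (crun) are assumptions and are not part of this cluster's configuration:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Allow this handler to process the seccomp notifier annotation described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]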
	I0819 12:47:23.510275 2122632 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 12:47:23.510282 2122632 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 12:47:23.510288 2122632 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 12:47:23.510297 2122632 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 12:47:23.510304 2122632 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 12:47:23.510311 2122632 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 12:47:23.510320 2122632 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 12:47:23.510330 2122632 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 12:47:23.510336 2122632 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 12:47:23.510342 2122632 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 12:47:23.510345 2122632 command_runner.go:130] > # Example:
	I0819 12:47:23.510350 2122632 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 12:47:23.510354 2122632 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 12:47:23.510362 2122632 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 12:47:23.510366 2122632 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 12:47:23.510370 2122632 command_runner.go:130] > # cpuset = 0
	I0819 12:47:23.510373 2122632 command_runner.go:130] > # cpushares = "0-1"
	I0819 12:47:23.510376 2122632 command_runner.go:130] > # Where:
	I0819 12:47:23.510380 2122632 command_runner.go:130] > # The workload name is workload-type.
	I0819 12:47:23.510387 2122632 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 12:47:23.510392 2122632 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 12:47:23.510397 2122632 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 12:47:23.510405 2122632 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 12:47:23.510410 2122632 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 12:47:23.510414 2122632 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 12:47:23.510420 2122632 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 12:47:23.510424 2122632 command_runner.go:130] > # Default value is set to true
	I0819 12:47:23.510428 2122632 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 12:47:23.510433 2122632 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 12:47:23.510438 2122632 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 12:47:23.510442 2122632 command_runner.go:130] > # Default value is set to 'false'
	I0819 12:47:23.510447 2122632 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 12:47:23.510453 2122632 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 12:47:23.510456 2122632 command_runner.go:130] > #
	I0819 12:47:23.510461 2122632 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 12:47:23.510466 2122632 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 12:47:23.510476 2122632 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 12:47:23.510481 2122632 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 12:47:23.510486 2122632 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 12:47:23.510489 2122632 command_runner.go:130] > [crio.image]
	I0819 12:47:23.510495 2122632 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 12:47:23.510499 2122632 command_runner.go:130] > # default_transport = "docker://"
	I0819 12:47:23.510504 2122632 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 12:47:23.510510 2122632 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:47:23.510516 2122632 command_runner.go:130] > # global_auth_file = ""
	I0819 12:47:23.510520 2122632 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 12:47:23.510525 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.510529 2122632 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 12:47:23.510536 2122632 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 12:47:23.510541 2122632 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:47:23.510546 2122632 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:47:23.510552 2122632 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 12:47:23.510559 2122632 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 12:47:23.510568 2122632 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 12:47:23.510577 2122632 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 12:47:23.510586 2122632 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 12:47:23.510595 2122632 command_runner.go:130] > # pause_command = "/pause"
	I0819 12:47:23.510604 2122632 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 12:47:23.510616 2122632 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 12:47:23.510627 2122632 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 12:47:23.510639 2122632 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 12:47:23.510652 2122632 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 12:47:23.510661 2122632 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 12:47:23.510665 2122632 command_runner.go:130] > # pinned_images = [
	I0819 12:47:23.510668 2122632 command_runner.go:130] > # ]
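An illustrative pinned_images entry using the exact and trailing-glob patterns described above; the second image name is hypothetical:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10",
		"quay.io/example/critical-agent:*",
	]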
	I0819 12:47:23.510674 2122632 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 12:47:23.510682 2122632 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 12:47:23.510690 2122632 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 12:47:23.510698 2122632 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 12:47:23.510704 2122632 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 12:47:23.510710 2122632 command_runner.go:130] > # signature_policy = ""
	I0819 12:47:23.510715 2122632 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 12:47:23.510724 2122632 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 12:47:23.510730 2122632 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 12:47:23.510736 2122632 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 12:47:23.510742 2122632 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 12:47:23.510748 2122632 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 12:47:23.510754 2122632 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 12:47:23.510762 2122632 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 12:47:23.510766 2122632 command_runner.go:130] > # changing them here.
	I0819 12:47:23.510772 2122632 command_runner.go:130] > # insecure_registries = [
	I0819 12:47:23.510776 2122632 command_runner.go:130] > # ]
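If TLS verification had to be skipped for a local registry, the entry would look roughly like the sketch below (the registry address is hypothetical); as the comments above note, configuring this via /etc/containers/registries.conf is usually preferable:

	[crio.image]
	insecure_registries = [
		"registry.local:5000",
	]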
	I0819 12:47:23.510782 2122632 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 12:47:23.510789 2122632 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 12:47:23.510793 2122632 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 12:47:23.510799 2122632 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 12:47:23.510804 2122632 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 12:47:23.510815 2122632 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 12:47:23.510818 2122632 command_runner.go:130] > # CNI plugins.
	I0819 12:47:23.510822 2122632 command_runner.go:130] > [crio.network]
	I0819 12:47:23.510827 2122632 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 12:47:23.510833 2122632 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 12:47:23.510837 2122632 command_runner.go:130] > # cni_default_network = ""
	I0819 12:47:23.510842 2122632 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 12:47:23.510849 2122632 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 12:47:23.510855 2122632 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 12:47:23.510861 2122632 command_runner.go:130] > # plugin_dirs = [
	I0819 12:47:23.510864 2122632 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 12:47:23.510867 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.510873 2122632 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 12:47:23.510877 2122632 command_runner.go:130] > [crio.metrics]
	I0819 12:47:23.510882 2122632 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 12:47:23.510888 2122632 command_runner.go:130] > enable_metrics = true
	I0819 12:47:23.510894 2122632 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 12:47:23.510899 2122632 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 12:47:23.510905 2122632 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 12:47:23.510913 2122632 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 12:47:23.510920 2122632 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 12:47:23.510927 2122632 command_runner.go:130] > # metrics_collectors = [
	I0819 12:47:23.510931 2122632 command_runner.go:130] > # 	"operations",
	I0819 12:47:23.510935 2122632 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 12:47:23.510942 2122632 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 12:47:23.510946 2122632 command_runner.go:130] > # 	"operations_errors",
	I0819 12:47:23.510950 2122632 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 12:47:23.510954 2122632 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 12:47:23.510958 2122632 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 12:47:23.510963 2122632 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 12:47:23.510967 2122632 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 12:47:23.510974 2122632 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 12:47:23.510978 2122632 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 12:47:23.510984 2122632 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 12:47:23.510988 2122632 command_runner.go:130] > # 	"containers_oom_total",
	I0819 12:47:23.510994 2122632 command_runner.go:130] > # 	"containers_oom",
	I0819 12:47:23.510998 2122632 command_runner.go:130] > # 	"processes_defunct",
	I0819 12:47:23.511004 2122632 command_runner.go:130] > # 	"operations_total",
	I0819 12:47:23.511008 2122632 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 12:47:23.511013 2122632 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 12:47:23.511016 2122632 command_runner.go:130] > # 	"operations_errors_total",
	I0819 12:47:23.511020 2122632 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 12:47:23.511025 2122632 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 12:47:23.511028 2122632 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 12:47:23.511033 2122632 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 12:47:23.511040 2122632 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 12:47:23.511046 2122632 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 12:47:23.511050 2122632 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 12:47:23.511057 2122632 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 12:47:23.511061 2122632 command_runner.go:130] > # ]
	I0819 12:47:23.511068 2122632 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 12:47:23.511072 2122632 command_runner.go:130] > # metrics_port = 9090
	I0819 12:47:23.511081 2122632 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 12:47:23.511085 2122632 command_runner.go:130] > # metrics_socket = ""
	I0819 12:47:23.511090 2122632 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 12:47:23.511098 2122632 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 12:47:23.511105 2122632 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 12:47:23.511110 2122632 command_runner.go:130] > # certificate on any modification event.
	I0819 12:47:23.511115 2122632 command_runner.go:130] > # metrics_cert = ""
	I0819 12:47:23.511121 2122632 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 12:47:23.511128 2122632 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 12:47:23.511132 2122632 command_runner.go:130] > # metrics_key = ""
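A minimal sketch of narrowing the exported metrics to a subset of the collectors listed above; the collector names come from that list, and the port simply restates the documented default:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# Export only these collectors instead of the full default set.
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]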
	I0819 12:47:23.511138 2122632 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 12:47:23.511142 2122632 command_runner.go:130] > [crio.tracing]
	I0819 12:47:23.511147 2122632 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 12:47:23.511153 2122632 command_runner.go:130] > # enable_tracing = false
	I0819 12:47:23.511158 2122632 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 12:47:23.511164 2122632 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 12:47:23.511171 2122632 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 12:47:23.511176 2122632 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
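Enabling trace export against a local OTLP collector would look roughly like this; the endpoint address is an assumption, and 1000000 samples per million means every span is kept, per the comment above:

	[crio.tracing]
	enable_tracing = true
	# gRPC OTLP collector endpoint (hypothetical local address).
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000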
	I0819 12:47:23.511180 2122632 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 12:47:23.511186 2122632 command_runner.go:130] > [crio.nri]
	I0819 12:47:23.511190 2122632 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 12:47:23.511194 2122632 command_runner.go:130] > # enable_nri = false
	I0819 12:47:23.511197 2122632 command_runner.go:130] > # NRI socket to listen on.
	I0819 12:47:23.511202 2122632 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 12:47:23.511206 2122632 command_runner.go:130] > # NRI plugin directory to use.
	I0819 12:47:23.511210 2122632 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 12:47:23.511217 2122632 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 12:47:23.511223 2122632 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 12:47:23.511231 2122632 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 12:47:23.511235 2122632 command_runner.go:130] > # nri_disable_connections = false
	I0819 12:47:23.511240 2122632 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 12:47:23.511247 2122632 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 12:47:23.511252 2122632 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 12:47:23.511258 2122632 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
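A sketch that turns NRI on while keeping the default paths and timeouts documented above:

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"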
	I0819 12:47:23.511264 2122632 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 12:47:23.511270 2122632 command_runner.go:130] > [crio.stats]
	I0819 12:47:23.511278 2122632 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 12:47:23.511285 2122632 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 12:47:23.511289 2122632 command_runner.go:130] > # stats_collection_period = 0
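For example, switching stats collection from on-demand to a fixed interval (the 10-second value is arbitrary, chosen only to illustrate the option):

	[crio.stats]
	# Collect pod and container stats every 10 seconds instead of on demand.
	stats_collection_period = 10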
	I0819 12:47:23.511411 2122632 cni.go:84] Creating CNI manager for ""
	I0819 12:47:23.511421 2122632 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:47:23.511430 2122632 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:47:23.511456 2122632 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-172739 NodeName:multinode-172739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:47:23.511606 2122632 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-172739"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:47:23.511686 2122632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:47:23.522060 2122632 command_runner.go:130] > kubeadm
	I0819 12:47:23.522083 2122632 command_runner.go:130] > kubectl
	I0819 12:47:23.522087 2122632 command_runner.go:130] > kubelet
	I0819 12:47:23.522111 2122632 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:47:23.522159 2122632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:47:23.531890 2122632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 12:47:23.548391 2122632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:47:23.564392 2122632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0819 12:47:23.580579 2122632 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0819 12:47:23.584623 2122632 command_runner.go:130] > 192.168.39.191	control-plane.minikube.internal
	I0819 12:47:23.584873 2122632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:47:23.727462 2122632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:47:23.742012 2122632 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739 for IP: 192.168.39.191
	I0819 12:47:23.742039 2122632 certs.go:194] generating shared ca certs ...
	I0819 12:47:23.742056 2122632 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:47:23.742220 2122632 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:47:23.742259 2122632 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:47:23.742281 2122632 certs.go:256] generating profile certs ...
	I0819 12:47:23.742366 2122632 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/client.key
	I0819 12:47:23.742422 2122632 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.key.39ec7326
	I0819 12:47:23.742456 2122632 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.key
	I0819 12:47:23.742468 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:47:23.742482 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:47:23.742495 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:47:23.742505 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:47:23.742515 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:47:23.742529 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:47:23.742541 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:47:23.742551 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:47:23.742612 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:47:23.742640 2122632 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:47:23.742652 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:47:23.742674 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:47:23.742698 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:47:23.742720 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:47:23.742757 2122632 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:47:23.742784 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> /usr/share/ca-certificates/20941932.pem
	I0819 12:47:23.742797 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:23.742809 2122632 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem -> /usr/share/ca-certificates/2094193.pem
	I0819 12:47:23.743473 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:47:23.767807 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:47:23.791662 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:47:23.814007 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:47:23.839282 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:47:23.863141 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:47:23.886057 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:47:23.908154 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/multinode-172739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 12:47:23.931063 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:47:23.953654 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:47:23.976541 2122632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:47:23.999770 2122632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:47:24.015470 2122632 ssh_runner.go:195] Run: openssl version
	I0819 12:47:24.021002 2122632 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 12:47:24.021248 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:47:24.032088 2122632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.036816 2122632 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.036844 2122632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.036885 2122632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:47:24.042350 2122632 command_runner.go:130] > 3ec20f2e
	I0819 12:47:24.042596 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:47:24.052753 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:47:24.063878 2122632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.068612 2122632 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.068649 2122632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.068705 2122632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:47:24.074458 2122632 command_runner.go:130] > b5213941
	I0819 12:47:24.074534 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:47:24.084100 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:47:24.095072 2122632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.099623 2122632 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.099646 2122632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.099682 2122632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:47:24.105374 2122632 command_runner.go:130] > 51391683
	I0819 12:47:24.105426 2122632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 12:47:24.115460 2122632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:47:24.120300 2122632 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:47:24.120322 2122632 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 12:47:24.120328 2122632 command_runner.go:130] > Device: 253,1	Inode: 9432598     Links: 1
	I0819 12:47:24.120335 2122632 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:47:24.120341 2122632 command_runner.go:130] > Access: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120345 2122632 command_runner.go:130] > Modify: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120350 2122632 command_runner.go:130] > Change: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120355 2122632 command_runner.go:130] >  Birth: 2024-08-19 12:40:47.553439181 +0000
	I0819 12:47:24.120410 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:47:24.126498 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.126568 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:47:24.132291 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.132357 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:47:24.137891 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.138071 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:47:24.143691 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.143926 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:47:24.149226 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.149427 2122632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:47:24.154533 2122632 command_runner.go:130] > Certificate will not expire
	I0819 12:47:24.154726 2122632 kubeadm.go:392] StartCluster: {Name:multinode-172739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-172739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.217 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:47:24.154836 2122632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:47:24.154887 2122632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:47:24.196012 2122632 command_runner.go:130] > ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1
	I0819 12:47:24.196035 2122632 command_runner.go:130] > e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e
	I0819 12:47:24.196041 2122632 command_runner.go:130] > 7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60
	I0819 12:47:24.196048 2122632 command_runner.go:130] > 7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2
	I0819 12:47:24.196053 2122632 command_runner.go:130] > 7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34
	I0819 12:47:24.196059 2122632 command_runner.go:130] > 0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8
	I0819 12:47:24.196064 2122632 command_runner.go:130] > 1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5
	I0819 12:47:24.196070 2122632 command_runner.go:130] > 0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a
	I0819 12:47:24.196089 2122632 cri.go:89] found id: "ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1"
	I0819 12:47:24.196096 2122632 cri.go:89] found id: "e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e"
	I0819 12:47:24.196098 2122632 cri.go:89] found id: "7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60"
	I0819 12:47:24.196101 2122632 cri.go:89] found id: "7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2"
	I0819 12:47:24.196105 2122632 cri.go:89] found id: "7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34"
	I0819 12:47:24.196108 2122632 cri.go:89] found id: "0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8"
	I0819 12:47:24.196112 2122632 cri.go:89] found id: "1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5"
	I0819 12:47:24.196114 2122632 cri.go:89] found id: "0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a"
	I0819 12:47:24.196117 2122632 cri.go:89] found id: ""
	I0819 12:47:24.196183 2122632 ssh_runner.go:195] Run: sudo runc list -f json
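	The container IDs parsed above come from the crictl invocation shown earlier in the log, which lists all containers (running and exited) filtered by the kube-system pod-namespace label. A minimal sketch of the same listing, assuming crictl on the node is already pointed at the CRI-O socket:
	    # IDs only, including exited containers, restricted to kube-system pods.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system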
	
	
	==> CRI-O <==
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.640794075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071892640771115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13e2ebba-2f36-4908-a875-70d771c2fb44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.641456477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f93b2456-c9ca-4af2-9760-fd5652144d9a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.641530191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f93b2456-c9ca-4af2-9760-fd5652144d9a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.641858632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f93b2456-c9ca-4af2-9760-fd5652144d9a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.687549227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f074a3ea-4902-4ee3-b4f0-fe3d24763b21 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.687651234Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f074a3ea-4902-4ee3-b4f0-fe3d24763b21 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.688805459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db5849e7-720e-4c88-84d7-37305d8ff305 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.689382180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071892689356547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db5849e7-720e-4c88-84d7-37305d8ff305 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.689952436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71d79290-235a-4378-902f-6410463e8e65 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.690030563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71d79290-235a-4378-902f-6410463e8e65 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.690478521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71d79290-235a-4378-902f-6410463e8e65 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.734057101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27773f3a-c2fc-4430-a39c-99dc5650cdd3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.734193502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27773f3a-c2fc-4430-a39c-99dc5650cdd3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.735553933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49d6fc33-6569-4ae6-bf35-562195d218ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.735975295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071892735954284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49d6fc33-6569-4ae6-bf35-562195d218ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.736502603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0b132ca-5872-4ec3-b713-62f13045ad09 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.736554954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0b132ca-5872-4ec3-b713-62f13045ad09 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.736916157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0b132ca-5872-4ec3-b713-62f13045ad09 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.778312479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcc54d7a-bd59-4ec2-9447-7c5e9a44312b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.778386708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcc54d7a-bd59-4ec2-9447-7c5e9a44312b name=/runtime.v1.RuntimeService/Version
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.779294345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d80f51ab-5716-4ec1-93fd-0dd03b2c3de3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.779714431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071892779693364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d80f51ab-5716-4ec1-93fd-0dd03b2c3de3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.780266775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=206b9cf8-5c34-46d5-887c-13efefbb4872 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.780317202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=206b9cf8-5c34-46d5-887c-13efefbb4872 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:51:32 multinode-172739 crio[2750]: time="2024-08-19 12:51:32.780655626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:496c26d69901995c5e2231b98d05573ce313f39c875a7d294400ba35cfb2fc1f,PodSandboxId:8609851bb92819a66bcdbfbfccf4be489e7ea39c976d94a39a0898eb5f59225b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724071684501085758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8,PodSandboxId:92c9401559c45a85af60134f10d5cb479a3f10a7b1f46c560e91fb0b122aeb20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724071651004730213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805,PodSandboxId:d5f63d9663d016557990bfae142e1c40cc468e4b03cb7d7f76023b3489a8d61a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724071650827599812,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b,PodSandboxId:dfec30fb619178aa1b8c5481ec83faf27aa1f02028a3216c2b906ef09e532f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724071650727807324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45a9788fa14f30429a3871fb5ff01c1f5b1850306cf3134ab9ec548d1f82f7c,PodSandboxId:90aea072eba447fc94ff5e110bf586fb063492754fb27b30a599f744c3c6b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724071650732810432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b,PodSandboxId:dfbe5dea075fc8cb7b86d66caa2911044f9b4aed80db2375365e8e718fcd6b20,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724071646976303352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab,PodSandboxId:5547b42c380c3a66a04e3f38cc236430062a78c953b578cafd86e8f4ab0c9c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724071646944390716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945,PodSandboxId:958b3261f0964a2f792d05f976feffb383e3f7287ea314c8b44d385f622de8c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724071646940441880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a125261250c55,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c,PodSandboxId:2c552218a9055f5306794c85ef2e759c110b19823fa311e0375b517a345e9292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724071646854567972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def9c00aa12ed253534a300d71a36ada81123c6e70d89473e9fafca93b4b4170,PodSandboxId:657161da2edeb8fc288056ecddc3f84358aed7ab29d98300c15f714d982aa557,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724071328984594419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-czkl5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77b4d7c5-4601-449e-943a-4bf0d53eb5c8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1,PodSandboxId:0ca70924c14b8fb350aee188c1f707c96f8b17fbab6f257069b83ce995bcfef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724071275922279501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnmfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b25bb77-4aed-4c5e-a243-af62da06a2de,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79879fc5f7a855a646ccd2395da2bb8633972d08001e288a10f55de9441370e,PodSandboxId:efacb57a7054f8e2e52e801e90833527dd047852ea29122eab8df71d466c2d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724071275881188549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8494c463-5cd5-4267-a1ae-177810040a05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60,PodSandboxId:ccc35bddd76b97f0ec29b1d86b06c4bacc3e9e4d26fc455f583a9830be069fa1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724071263954702806,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pc42l,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 039e4f40-0fd7-4cec-bdb4-77fb41f2c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2,PodSandboxId:856dfb5095308d115b49a33477d891d338582db4a3c83f112507a432cdc8164c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724071263835767303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwckd,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8610d0bc-e9c0-4a52-bce6-d7dc2afeb5c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5,PodSandboxId:d90b496120c69d3d051d8e1608241ad24a491520f40b39f030fe085ba65078e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724071251148429212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f95ba2d20ab2a954f95d491e841ccc
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34,PodSandboxId:7e51c80fc0d735dac0215f6993cd411e6ef2c37d4fc7fda7b4b641f441ae27a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724071251169075087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d876c9452dc834909a
125261250c55,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8,PodSandboxId:222b28c86955163db5fb227b33f303ab1c6490380342ae86a5a8a333105b1980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724071251149722561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ea879b7d0ba1998b615821db262369,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a,PodSandboxId:33f130edf5fccdd8125b2bb222b273b8ab5e225a859ff6cf5d7fb6e863c25a54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724071251111750204,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-172739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03c91913e812d97f83650df33a579b1,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=206b9cf8-5c34-46d5-887c-13efefbb4872 name=/runtime.v1.RuntimeService/ListContainers
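	The ListContainers dumps above are CRI-O's replies to the kubelet's periodic /runtime.v1.RuntimeService/ListContainers polls over the CRI socket recorded in the node annotations (unix:///var/run/crio/crio.sock). A minimal Go sketch of issuing the same call directly is shown below; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available and that it runs on the node with access to that socket — it is an illustration, not part of the test suite.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket named in the node's kubeadm cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" debug lines above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-17s %-25s attempt=%d\n",
				c.Id[:13], c.State, c.Metadata.Name, c.Metadata.Attempt)
		}
	}

	The restartCount/attempt values in the dump (attempt 0 containers CONTAINER_EXITED, attempt 1 containers CONTAINER_RUNNING) reflect the node restart during this test; the same data is summarized in the "container status" table that follows.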
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	496c26d699019       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8609851bb9281       busybox-7dff88458-czkl5
	35f25641c5dfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   92c9401559c45       kindnet-pc42l
	543ac994ca741       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   d5f63d9663d01       coredns-6f6b679f8f-nnmfg
	f45a9788fa14f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   90aea072eba44       storage-provisioner
	9f91dcff6705b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   dfec30fb61917       kube-proxy-fwckd
	843131df4f0f0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   dfbe5dea075fc       kube-scheduler-multinode-172739
	aa230465a97a6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   5547b42c380c3       kube-apiserver-multinode-172739
	e8764e282807e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   958b3261f0964       kube-controller-manager-multinode-172739
	57507b47037f0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   2c552218a9055       etcd-multinode-172739
	def9c00aa12ed       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   657161da2edeb       busybox-7dff88458-czkl5
	ef89dd32791ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   0ca70924c14b8       coredns-6f6b679f8f-nnmfg
	e79879fc5f7a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   efacb57a7054f       storage-provisioner
	7375e5f48c3e1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   ccc35bddd76b9       kindnet-pc42l
	7793674a49e92       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   856dfb5095308       kube-proxy-fwckd
	7d77d11ed2ed2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   7e51c80fc0d73       kube-controller-manager-multinode-172739
	0aa9dce507ce8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   222b28c869551       kube-apiserver-multinode-172739
	1aa63e92b723d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   d90b496120c69       etcd-multinode-172739
	0b1e3f91d8aef       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   33f130edf5fcc       kube-scheduler-multinode-172739
	
	
	==> coredns [543ac994ca741441c4d9708cfa516e5f9af77a574bff2354d97fa3a811fbd805] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55349 - 62834 "HINFO IN 6632884223335137357.1338848667664456225. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01124799s
	
	
	==> coredns [ef89dd32791ace6948049465b6fa9ff69f297da02f0e57ab2749972773f974b1] <==
	[INFO] 10.244.1.2:59315 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001647449s
	[INFO] 10.244.1.2:55593 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116989s
	[INFO] 10.244.1.2:56425 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081028s
	[INFO] 10.244.1.2:52745 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001198701s
	[INFO] 10.244.1.2:53720 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006822s
	[INFO] 10.244.1.2:49332 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086004s
	[INFO] 10.244.1.2:59109 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067783s
	[INFO] 10.244.0.3:58222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112986s
	[INFO] 10.244.0.3:33297 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107664s
	[INFO] 10.244.0.3:49911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048819s
	[INFO] 10.244.0.3:43193 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064409s
	[INFO] 10.244.1.2:45863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178747s
	[INFO] 10.244.1.2:59721 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154674s
	[INFO] 10.244.1.2:37933 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113763s
	[INFO] 10.244.1.2:52137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133666s
	[INFO] 10.244.0.3:48915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118063s
	[INFO] 10.244.0.3:43620 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015968s
	[INFO] 10.244.0.3:50159 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094508s
	[INFO] 10.244.0.3:51565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067737s
	[INFO] 10.244.1.2:51214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129447s
	[INFO] 10.244.1.2:47491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001125s
	[INFO] 10.244.1.2:56930 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066769s
	[INFO] 10.244.1.2:48361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077915s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-172739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-172739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=multinode-172739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_40_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:40:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-172739
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:51:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:40:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:40:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:40:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:47:30 +0000   Mon, 19 Aug 2024 12:41:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    multinode-172739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3eef8608da994e1295670ed1689c13f1
	  System UUID:                3eef8608-da99-4e12-9567-0ed1689c13f1
	  Boot ID:                    4efaa1e9-af67-4dcb-9d07-acc7c11cdfca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-czkl5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 coredns-6f6b679f8f-nnmfg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-172739                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-pc42l                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-172739             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-172739    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fwckd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-172739             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-172739 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-172739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-172739 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-172739 event: Registered Node multinode-172739 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-172739 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-172739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-172739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-172739 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node multinode-172739 event: Registered Node multinode-172739 in Controller
	
	
	Name:               multinode-172739-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-172739-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=multinode-172739
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_48_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:48:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-172739-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:49:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:49:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:49:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:49:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 12:48:43 +0000   Mon, 19 Aug 2024 12:49:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-172739-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5e74e0080be4d24abf93f8d5486aaef
	  System UUID:                c5e74e00-80be-4d24-abf9-3f8d5486aaef
	  Boot ID:                    6fa2cf42-2cb3-477a-b810-4dac77700fa0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ks8nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kindnet-vxlzj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m47s
	  kube-system                 kube-proxy-n6npn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  Starting                 9m42s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m47s (x2 over 9m48s)  kubelet          Node multinode-172739-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m47s (x2 over 9m48s)  kubelet          Node multinode-172739-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m47s (x2 over 9m48s)  kubelet          Node multinode-172739-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m29s                  kubelet          Node multinode-172739-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node multinode-172739-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node multinode-172739-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node multinode-172739-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-172739-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-172739-m02 status is now: NodeNotReady
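	The Unknown/NodeNotReady conditions for multinode-172739-m02 above are what the test's node-status checks read back from the Node objects after the kubelet on that machine stopped posting status. A minimal client-go sketch that prints each node's Ready condition in the same form is shown below; the kubeconfig path (clientcmd.RecommendedHomeFile, i.e. ~/.kube/config) is an assumption and should point at the profile's kubeconfig.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust for the minikube profile in use.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, cond := range n.Status.Conditions {
				// Print only the Ready condition, mirroring the Conditions tables above.
				if cond.Type == corev1.NodeReady {
					fmt.Printf("%-25s Ready=%-8s reason=%s\n", n.Name, cond.Status, cond.Reason)
				}
			}
		}
	}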
	
	
	==> dmesg <==
	[  +0.064117] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062025] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.186161] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.120603] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.261815] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.936231] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +3.811228] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.065522] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.008447] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.087891] kauditd_printk_skb: 69 callbacks suppressed
	[Aug19 12:41] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.091712] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.950459] kauditd_printk_skb: 69 callbacks suppressed
	[Aug19 12:42] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 12:47] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.143262] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.164081] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.132101] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.277336] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +0.703570] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +2.314565] systemd-fstab-generator[2960]: Ignoring "noauto" option for root device
	[  +4.695490] kauditd_printk_skb: 184 callbacks suppressed
	[ +17.204099] systemd-fstab-generator[3814]: Ignoring "noauto" option for root device
	[  +0.092148] kauditd_printk_skb: 34 callbacks suppressed
	[Aug19 12:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [1aa63e92b723d8c7ce76bde6151410872c79cc083284faeece6386610063d5c5] <==
	{"level":"info","ts":"2024-08-19T12:40:52.288346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:40:52.290211Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:40:52.290353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:40:52.290406Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:40:52.290437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:40:52.291302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:40:52.293873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.191:2379"}
	{"level":"info","ts":"2024-08-19T12:40:52.291647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:40:52.296842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:40:52.291674Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:40:52.296988Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:41:10.694655Z","caller":"traceutil/trace.go:171","msg":"trace[999557054] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"124.245816ms","start":"2024-08-19T12:41:10.570385Z","end":"2024-08-19T12:41:10.694631Z","steps":["trace[999557054] 'process raft request'  (duration: 124.143859ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:41:53.292537Z","caller":"traceutil/trace.go:171","msg":"trace[384693846] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"133.15129ms","start":"2024-08-19T12:41:53.159364Z","end":"2024-08-19T12:41:53.292515Z","steps":["trace[384693846] 'process raft request'  (duration: 132.926691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:42:39.494506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.507186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-172739-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T12:42:39.494738Z","caller":"traceutil/trace.go:171","msg":"trace[1924394683] range","detail":"{range_begin:/registry/minions/multinode-172739-m03; range_end:; response_count:0; response_revision:574; }","duration":"149.821887ms","start":"2024-08-19T12:42:39.344902Z","end":"2024-08-19T12:42:39.494724Z","steps":["trace[1924394683] 'range keys from in-memory index tree'  (duration: 149.447353ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T12:45:50.935209Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T12:45:50.935375Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-172739","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	{"level":"warn","ts":"2024-08-19T12:45:50.935465Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:45:50.935561Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:45:51.023615Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:45:51.023721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:45:51.025167Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f21a8e08563785d2","current-leader-member-id":"f21a8e08563785d2"}
	{"level":"info","ts":"2024-08-19T12:45:51.028031Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:45:51.028263Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:45:51.028297Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-172739","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	
	
	==> etcd [57507b47037f0eb0bc4b3eff7191a5b232c1280600d3c7530d4478d3f586579c] <==
	{"level":"info","ts":"2024-08-19T12:47:27.267374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 switched to configuration voters=(17445412273030399442)"}
	{"level":"info","ts":"2024-08-19T12:47:27.289368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","added-peer-id":"f21a8e08563785d2","added-peer-peer-urls":["https://192.168.39.191:2380"]}
	{"level":"info","ts":"2024-08-19T12:47:27.289483Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:47:27.289513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:47:27.299878Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:47:27.300140Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f21a8e08563785d2","initial-advertise-peer-urls":["https://192.168.39.191:2380"],"listen-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.191:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:47:27.300165Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:47:27.300224Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:47:27.300230Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-08-19T12:47:28.305179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:47:28.305285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:47:28.305346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgPreVoteResp from f21a8e08563785d2 at term 2"}
	{"level":"info","ts":"2024-08-19T12:47:28.305381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.305405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgVoteResp from f21a8e08563785d2 at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.305432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.305457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f21a8e08563785d2 elected leader f21a8e08563785d2 at term 3"}
	{"level":"info","ts":"2024-08-19T12:47:28.310330Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f21a8e08563785d2","local-member-attributes":"{Name:multinode-172739 ClientURLs:[https://192.168.39.191:2379]}","request-path":"/0/members/f21a8e08563785d2/attributes","cluster-id":"78cc5c67b96828b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:47:28.310640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:47:28.311062Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:47:28.315009Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:47:28.329976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.191:2379"}
	{"level":"info","ts":"2024-08-19T12:47:28.330541Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:47:28.315137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:47:28.331183Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:47:28.343359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:51:33 up 11 min,  0 users,  load average: 0.10, 0.14, 0.09
	Linux multinode-172739 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35f25641c5dfdeaca4e6af72814159890c9918206022279e7ee42004d2b026e8] <==
	I0819 12:50:31.957193       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:50:41.963316       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:50:41.963380       1 main.go:299] handling current node
	I0819 12:50:41.963397       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:50:41.963403       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:50:51.957066       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:50:51.957248       1 main.go:299] handling current node
	I0819 12:50:51.957282       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:50:51.957302       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:51:01.961541       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:51:01.961589       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:51:01.961716       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:51:01.961741       1 main.go:299] handling current node
	I0819 12:51:11.959621       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:51:11.959732       1 main.go:299] handling current node
	I0819 12:51:11.959780       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:51:11.959788       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:51:21.965218       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:51:21.965278       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:51:21.965436       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:51:21.965473       1 main.go:299] handling current node
	I0819 12:51:31.957226       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:51:31.957340       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:51:31.957508       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:51:31.957532       1 main.go:299] handling current node
	
	
	==> kindnet [7375e5f48c3e1922e01c6a92d400b27c989817d0fa0650730ab31b9b0649ca60] <==
	I0819 12:45:05.050012       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:15.058296       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:15.058437       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:15.058603       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:15.058627       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:15.058718       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:15.058738       1 main.go:299] handling current node
	I0819 12:45:25.058485       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:25.058667       1 main.go:299] handling current node
	I0819 12:45:25.058719       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:25.058740       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:25.058916       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:25.058939       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:35.056477       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:35.056586       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:35.056746       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:35.056768       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	I0819 12:45:35.056844       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:35.056876       1 main.go:299] handling current node
	I0819 12:45:45.055090       1 main.go:295] Handling node with IPs: map[192.168.39.191:{}]
	I0819 12:45:45.055185       1 main.go:299] handling current node
	I0819 12:45:45.055220       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0819 12:45:45.055228       1 main.go:322] Node multinode-172739-m02 has CIDR [10.244.1.0/24] 
	I0819 12:45:45.055449       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0819 12:45:45.055474       1 main.go:322] Node multinode-172739-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0aa9dce507ce827b16bdfa814e609b17fbcb211c68cedcb54c55ed58ba3c32f8] <==
	I0819 12:40:55.196917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:40:55.294549       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 12:40:55.302367       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I0819 12:40:55.303281       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:40:55.310973       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:40:55.654242       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:40:56.354515       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:40:56.385328       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 12:40:56.406435       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:41:01.256438       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 12:41:01.361234       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 12:42:10.050557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51218: use of closed network connection
	E0819 12:42:10.238795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51244: use of closed network connection
	E0819 12:42:10.434055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51262: use of closed network connection
	E0819 12:42:10.612036       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51284: use of closed network connection
	E0819 12:42:10.785470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51300: use of closed network connection
	E0819 12:42:10.951337       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51320: use of closed network connection
	E0819 12:42:11.232000       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51350: use of closed network connection
	E0819 12:42:11.407675       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51368: use of closed network connection
	E0819 12:42:11.572397       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:51388: use of closed network connection
	E0819 12:42:11.748559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8443->192.168.39.1:55608: use of closed network connection
	http2: server: error reading preface from client 192.168.39.217:48318: read tcp 192.168.39.191:8443->192.168.39.217:48318: read: connection reset by peer
	I0819 12:45:50.922428       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0819 12:45:50.952639       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0819 12:45:50.954738       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [aa230465a97a678916003722e53dd91f2f5e69d09ed2e1c5865d0e01e59da5ab] <==
	I0819 12:47:29.948818       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:47:29.949687       1 policy_source.go:224] refreshing policies
	I0819 12:47:29.958341       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:47:30.007882       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:47:30.011740       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:47:30.011803       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:47:30.012405       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:47:30.014017       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:47:30.014428       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:47:30.014468       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:47:30.021662       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:47:30.021684       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:47:30.028561       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:47:30.028588       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:47:30.028605       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:47:30.028610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:47:30.028614       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:47:30.815928       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:47:32.139830       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:47:32.270995       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:47:32.287725       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:47:32.359783       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:47:32.366582       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:47:33.290669       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:47:33.590001       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7d77d11ed2ed239056e8d2381737b2f837bafc1374a232f84fc945cb17f7cf34] <==
	I0819 12:43:26.541083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:43:26.542350       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:27.710676       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-172739-m03\" does not exist"
	I0819 12:43:27.710917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:43:27.721839       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-172739-m03" podCIDRs=["10.244.3.0/24"]
	I0819 12:43:27.721878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:27.721902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:27.737214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:28.067391       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:28.404825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:30.501937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:37.931496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:45.341833       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m03"
	I0819 12:43:45.342600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:45.354416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:43:45.448741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:44:30.468039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:44:30.468430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:44:30.471564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:44:30.494986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:44:30.502542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:44:30.547840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.131036ms"
	I0819 12:44:30.547931       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.165µs"
	I0819 12:44:35.627267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:44:45.705478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	
	
	==> kube-controller-manager [e8764e282807eac82774573f72ca4f5fe102a3f50de20300fde498db65f12945] <==
	I0819 12:48:48.792544       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-172739-m03" podCIDRs=["10.244.2.0/24"]
	I0819 12:48:48.792587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:48.792612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:48.797410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:49.088226       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:49.428585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:53.582752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:48:59.182243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:06.432629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:06.432739       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:49:06.451521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:08.498182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:11.037850       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:11.056243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:11.612602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m03"
	I0819 12:49:11.612726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-172739-m02"
	I0819 12:49:48.515433       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:49:48.532611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	I0819 12:49:48.542288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.040615ms"
	I0819 12:49:48.542386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.403µs"
	I0819 12:49:53.256310       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bxt54"
	I0819 12:49:53.286319       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bxt54"
	I0819 12:49:53.286371       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x5kdg"
	I0819 12:49:53.335381       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x5kdg"
	I0819 12:49:53.584026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-172739-m02"
	
	
	==> kube-proxy [7793674a49e92acea9566bbb2e201eee3db34fe55161fcc7f16361b34bb892f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:41:04.067387       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:41:04.084747       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.191"]
	E0819 12:41:04.084834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:41:04.124609       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:41:04.124716       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:41:04.124757       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:41:04.127379       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:41:04.127703       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:41:04.127733       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:41:04.130522       1 config.go:197] "Starting service config controller"
	I0819 12:41:04.130554       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:41:04.130576       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:41:04.130580       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:41:04.133826       1 config.go:326] "Starting node config controller"
	I0819 12:41:04.133874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:41:04.231478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:41:04.231640       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:41:04.234692       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9f91dcff6705ba3104b83628107b85b8fb0f68c565606cad42a7c8558444137b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:47:31.132224       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:47:31.141520       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.191"]
	E0819 12:47:31.141602       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:47:31.231956       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:47:31.232003       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:47:31.232032       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:47:31.234594       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:47:31.234835       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:47:31.234867       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:47:31.236734       1 config.go:197] "Starting service config controller"
	I0819 12:47:31.236777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:47:31.236798       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:47:31.236802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:47:31.237437       1 config.go:326] "Starting node config controller"
	I0819 12:47:31.237467       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:47:31.337182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:47:31.337235       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:47:31.337570       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0b1e3f91d8aef93204d65740971434e447791014a0c8ceb97b13cdd87bb6a81a] <==
	W0819 12:40:53.703997       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 12:40:53.704030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:53.704215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 12:40:53.704304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:53.704420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:40:53.704502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:53.705822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:40:53.707215       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 12:40:54.518368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:40:54.518404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.605225       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:40:54.605277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.620909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 12:40:54.621211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.633011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 12:40:54.633167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.637083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:40:54.637212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:54.934722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 12:40:54.935273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:40:55.080419       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:40:55.080602       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 12:40:57.568982       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:45:50.932772       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0819 12:45:50.935537       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [843131df4f0f074845c8458a5446a2d1c2104bae712abe9217b607bfc2b6502b] <==
	I0819 12:47:27.721234       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:47:29.928748       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:47:29.928797       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:47:29.928809       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:47:29.928822       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:47:29.972339       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:47:29.972403       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:47:29.974562       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:47:29.974755       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:47:29.974802       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:47:29.974826       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:47:30.075330       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:50:16 multinode-172739 kubelet[2967]: E0819 12:50:16.280565    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071816280080076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:26 multinode-172739 kubelet[2967]: E0819 12:50:26.267733    2967 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:50:26 multinode-172739 kubelet[2967]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:50:26 multinode-172739 kubelet[2967]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:50:26 multinode-172739 kubelet[2967]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:50:26 multinode-172739 kubelet[2967]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:50:26 multinode-172739 kubelet[2967]: E0819 12:50:26.281932    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071826281632490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:26 multinode-172739 kubelet[2967]: E0819 12:50:26.281973    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071826281632490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:36 multinode-172739 kubelet[2967]: E0819 12:50:36.283510    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071836283077645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:36 multinode-172739 kubelet[2967]: E0819 12:50:36.283551    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071836283077645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:46 multinode-172739 kubelet[2967]: E0819 12:50:46.284971    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071846284595880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:46 multinode-172739 kubelet[2967]: E0819 12:50:46.285464    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071846284595880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:56 multinode-172739 kubelet[2967]: E0819 12:50:56.287725    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071856287236176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:50:56 multinode-172739 kubelet[2967]: E0819 12:50:56.287773    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071856287236176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:51:06 multinode-172739 kubelet[2967]: E0819 12:51:06.290782    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071866290164084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:51:06 multinode-172739 kubelet[2967]: E0819 12:51:06.291278    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071866290164084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:51:16 multinode-172739 kubelet[2967]: E0819 12:51:16.293653    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071876293294880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:51:16 multinode-172739 kubelet[2967]: E0819 12:51:16.293955    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071876293294880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:51:26 multinode-172739 kubelet[2967]: E0819 12:51:26.268161    2967 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:51:26 multinode-172739 kubelet[2967]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:51:26 multinode-172739 kubelet[2967]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:51:26 multinode-172739 kubelet[2967]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:51:26 multinode-172739 kubelet[2967]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:51:26 multinode-172739 kubelet[2967]: E0819 12:51:26.296168    2967 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071886295648871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:51:26 multinode-172739 kubelet[2967]: E0819 12:51:26.296199    2967 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724071886295648871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:51:32.351147 2124548 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19479-2086935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-172739 -n multinode-172739
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-172739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                    
TestPreload (170.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-873920 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 12:56:26.668504 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-873920 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.756087943s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-873920 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-873920 image pull gcr.io/k8s-minikube/busybox: (1.048215723s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-873920
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-873920: (8.291585523s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-873920 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-873920 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.949769499s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-873920 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-19 12:58:15.31308474 +0000 UTC m=+3695.947875034
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-873920 -n test-preload-873920
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-873920 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-873920 logs -n 25: (1.059281018s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739 sudo cat                                       | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m03_multinode-172739.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt                       | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m02:/home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n                                                                 | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | multinode-172739-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-172739 ssh -n multinode-172739-m02 sudo cat                                   | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | /home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-172739 node stop m03                                                          | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	| node    | multinode-172739 node start                                                             | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC | 19 Aug 24 12:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC |                     |
	| stop    | -p multinode-172739                                                                     | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:43 UTC |                     |
	| start   | -p multinode-172739                                                                     | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:45 UTC | 19 Aug 24 12:49 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC |                     |
	| node    | multinode-172739 node delete                                                            | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC | 19 Aug 24 12:49 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-172739 stop                                                                   | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:49 UTC |                     |
	| start   | -p multinode-172739                                                                     | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:51 UTC | 19 Aug 24 12:54 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-172739                                                                | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:54 UTC |                     |
	| start   | -p multinode-172739-m02                                                                 | multinode-172739-m02 | jenkins | v1.33.1 | 19 Aug 24 12:54 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-172739-m03                                                                 | multinode-172739-m03 | jenkins | v1.33.1 | 19 Aug 24 12:54 UTC | 19 Aug 24 12:55 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-172739                                                                 | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC |                     |
	| delete  | -p multinode-172739-m03                                                                 | multinode-172739-m03 | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:55 UTC |
	| delete  | -p multinode-172739                                                                     | multinode-172739     | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:55 UTC |
	| start   | -p test-preload-873920                                                                  | test-preload-873920  | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:57 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-873920 image pull                                                          | test-preload-873920  | jenkins | v1.33.1 | 19 Aug 24 12:57 UTC | 19 Aug 24 12:57 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-873920                                                                  | test-preload-873920  | jenkins | v1.33.1 | 19 Aug 24 12:57 UTC | 19 Aug 24 12:57 UTC |
	| start   | -p test-preload-873920                                                                  | test-preload-873920  | jenkins | v1.33.1 | 19 Aug 24 12:57 UTC | 19 Aug 24 12:58 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-873920 image list                                                          | test-preload-873920  | jenkins | v1.33.1 | 19 Aug 24 12:58 UTC | 19 Aug 24 12:58 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:57:13
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:57:13.181422 2126975 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:57:13.181687 2126975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:57:13.181696 2126975 out.go:358] Setting ErrFile to fd 2...
	I0819 12:57:13.181700 2126975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:57:13.181867 2126975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:57:13.182404 2126975 out.go:352] Setting JSON to false
	I0819 12:57:13.183407 2126975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":164383,"bootTime":1723907850,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:57:13.183467 2126975 start.go:139] virtualization: kvm guest
	I0819 12:57:13.185281 2126975 out.go:177] * [test-preload-873920] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:57:13.186359 2126975 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:57:13.186387 2126975 notify.go:220] Checking for updates...
	I0819 12:57:13.188363 2126975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:57:13.189460 2126975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:57:13.190383 2126975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:57:13.191293 2126975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:57:13.192250 2126975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:57:13.193691 2126975 config.go:182] Loaded profile config "test-preload-873920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 12:57:13.194112 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:57:13.194169 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:57:13.209396 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0819 12:57:13.209926 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:57:13.210487 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:57:13.210511 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:57:13.210862 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:57:13.211029 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:13.212576 2126975 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 12:57:13.213578 2126975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:57:13.213857 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:57:13.213888 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:57:13.228365 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0819 12:57:13.228774 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:57:13.229212 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:57:13.229239 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:57:13.229556 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:57:13.229731 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:13.263457 2126975 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:57:13.264539 2126975 start.go:297] selected driver: kvm2
	I0819 12:57:13.264552 2126975 start.go:901] validating driver "kvm2" against &{Name:test-preload-873920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-873920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:57:13.264678 2126975 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:57:13.265369 2126975 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:57:13.265452 2126975 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:57:13.279889 2126975 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:57:13.280257 2126975 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:57:13.280328 2126975 cni.go:84] Creating CNI manager for ""
	I0819 12:57:13.280347 2126975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:57:13.280415 2126975 start.go:340] cluster config:
	{Name:test-preload-873920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-873920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:57:13.280540 2126975 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:57:13.282546 2126975 out.go:177] * Starting "test-preload-873920" primary control-plane node in "test-preload-873920" cluster
	I0819 12:57:13.283515 2126975 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 12:57:13.307539 2126975 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0819 12:57:13.307579 2126975 cache.go:56] Caching tarball of preloaded images
	I0819 12:57:13.307740 2126975 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 12:57:13.309203 2126975 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0819 12:57:13.310186 2126975 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 12:57:13.336883 2126975 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0819 12:57:16.623135 2126975 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 12:57:16.623282 2126975 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 12:57:17.493536 2126975 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0819 12:57:17.493674 2126975 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/config.json ...
	I0819 12:57:17.493899 2126975 start.go:360] acquireMachinesLock for test-preload-873920: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:57:17.493967 2126975 start.go:364] duration metric: took 45.492µs to acquireMachinesLock for "test-preload-873920"
	I0819 12:57:17.493984 2126975 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:57:17.493990 2126975 fix.go:54] fixHost starting: 
	I0819 12:57:17.494309 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:57:17.494343 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:57:17.509349 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0819 12:57:17.509790 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:57:17.510328 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:57:17.510351 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:57:17.510726 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:57:17.510920 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:17.511047 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetState
	I0819 12:57:17.512732 2126975 fix.go:112] recreateIfNeeded on test-preload-873920: state=Stopped err=<nil>
	I0819 12:57:17.512752 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	W0819 12:57:17.512904 2126975 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:57:17.514661 2126975 out.go:177] * Restarting existing kvm2 VM for "test-preload-873920" ...
	I0819 12:57:17.515613 2126975 main.go:141] libmachine: (test-preload-873920) Calling .Start
	I0819 12:57:17.515779 2126975 main.go:141] libmachine: (test-preload-873920) Ensuring networks are active...
	I0819 12:57:17.516538 2126975 main.go:141] libmachine: (test-preload-873920) Ensuring network default is active
	I0819 12:57:17.516824 2126975 main.go:141] libmachine: (test-preload-873920) Ensuring network mk-test-preload-873920 is active
	I0819 12:57:17.517223 2126975 main.go:141] libmachine: (test-preload-873920) Getting domain xml...
	I0819 12:57:17.517998 2126975 main.go:141] libmachine: (test-preload-873920) Creating domain...
	I0819 12:57:18.729613 2126975 main.go:141] libmachine: (test-preload-873920) Waiting to get IP...
	I0819 12:57:18.730714 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:18.731097 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:18.731187 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:18.731093 2127027 retry.go:31] will retry after 218.907476ms: waiting for machine to come up
	I0819 12:57:18.951695 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:18.952112 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:18.952146 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:18.952063 2127027 retry.go:31] will retry after 343.524858ms: waiting for machine to come up
	I0819 12:57:19.297830 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:19.298219 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:19.298248 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:19.298171 2127027 retry.go:31] will retry after 395.933523ms: waiting for machine to come up
	I0819 12:57:19.695739 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:19.696206 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:19.696238 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:19.696178 2127027 retry.go:31] will retry after 609.126216ms: waiting for machine to come up
	I0819 12:57:20.307048 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:20.307477 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:20.307499 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:20.307444 2127027 retry.go:31] will retry after 502.183551ms: waiting for machine to come up
	I0819 12:57:20.811156 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:20.811605 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:20.811637 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:20.811550 2127027 retry.go:31] will retry after 774.596376ms: waiting for machine to come up
	I0819 12:57:21.587450 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:21.587843 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:21.587877 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:21.587783 2127027 retry.go:31] will retry after 826.416972ms: waiting for machine to come up
	I0819 12:57:22.415814 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:22.416318 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:22.416343 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:22.416261 2127027 retry.go:31] will retry after 907.736389ms: waiting for machine to come up
	I0819 12:57:23.325196 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:23.325612 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:23.325646 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:23.325579 2127027 retry.go:31] will retry after 1.359650424s: waiting for machine to come up
	I0819 12:57:24.687243 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:24.687631 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:24.687663 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:24.687575 2127027 retry.go:31] will retry after 2.192349239s: waiting for machine to come up
	I0819 12:57:26.883294 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:26.883787 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:26.883859 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:26.883778 2127027 retry.go:31] will retry after 2.523070092s: waiting for machine to come up
	I0819 12:57:29.408150 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:29.408653 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:29.408683 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:29.408598 2127027 retry.go:31] will retry after 3.39722667s: waiting for machine to come up
	I0819 12:57:32.807634 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:32.808014 2126975 main.go:141] libmachine: (test-preload-873920) DBG | unable to find current IP address of domain test-preload-873920 in network mk-test-preload-873920
	I0819 12:57:32.808042 2126975 main.go:141] libmachine: (test-preload-873920) DBG | I0819 12:57:32.807990 2127027 retry.go:31] will retry after 3.673874395s: waiting for machine to come up
	I0819 12:57:36.485769 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.486141 2126975 main.go:141] libmachine: (test-preload-873920) Found IP for machine: 192.168.39.91
	I0819 12:57:36.486168 2126975 main.go:141] libmachine: (test-preload-873920) Reserving static IP address...
	I0819 12:57:36.486184 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has current primary IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.486618 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "test-preload-873920", mac: "52:54:00:b1:57:c9", ip: "192.168.39.91"} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.486649 2126975 main.go:141] libmachine: (test-preload-873920) Reserved static IP address: 192.168.39.91
	I0819 12:57:36.486667 2126975 main.go:141] libmachine: (test-preload-873920) DBG | skip adding static IP to network mk-test-preload-873920 - found existing host DHCP lease matching {name: "test-preload-873920", mac: "52:54:00:b1:57:c9", ip: "192.168.39.91"}
	I0819 12:57:36.486681 2126975 main.go:141] libmachine: (test-preload-873920) Waiting for SSH to be available...
	I0819 12:57:36.486697 2126975 main.go:141] libmachine: (test-preload-873920) DBG | Getting to WaitForSSH function...
	I0819 12:57:36.488678 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.489026 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.489057 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.489139 2126975 main.go:141] libmachine: (test-preload-873920) DBG | Using SSH client type: external
	I0819 12:57:36.489193 2126975 main.go:141] libmachine: (test-preload-873920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa (-rw-------)
	I0819 12:57:36.489224 2126975 main.go:141] libmachine: (test-preload-873920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:57:36.489239 2126975 main.go:141] libmachine: (test-preload-873920) DBG | About to run SSH command:
	I0819 12:57:36.489247 2126975 main.go:141] libmachine: (test-preload-873920) DBG | exit 0
	I0819 12:57:36.616121 2126975 main.go:141] libmachine: (test-preload-873920) DBG | SSH cmd err, output: <nil>: 
	I0819 12:57:36.616564 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetConfigRaw
	I0819 12:57:36.617219 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetIP
	I0819 12:57:36.619704 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.620042 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.620072 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.620335 2126975 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/config.json ...
	I0819 12:57:36.620517 2126975 machine.go:93] provisionDockerMachine start ...
	I0819 12:57:36.620536 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:36.620758 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:36.622989 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.623345 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.623376 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.623516 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:36.623706 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:36.623843 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:36.623949 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:36.624074 2126975 main.go:141] libmachine: Using SSH client type: native
	I0819 12:57:36.624291 2126975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0819 12:57:36.624304 2126975 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:57:36.732130 2126975 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 12:57:36.732158 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetMachineName
	I0819 12:57:36.732446 2126975 buildroot.go:166] provisioning hostname "test-preload-873920"
	I0819 12:57:36.732476 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetMachineName
	I0819 12:57:36.732788 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:36.735321 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.735683 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.735708 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.735815 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:36.735992 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:36.736176 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:36.736313 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:36.736480 2126975 main.go:141] libmachine: Using SSH client type: native
	I0819 12:57:36.736641 2126975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0819 12:57:36.736653 2126975 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-873920 && echo "test-preload-873920" | sudo tee /etc/hostname
	I0819 12:57:36.858417 2126975 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-873920
	
	I0819 12:57:36.858451 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:36.861399 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.861768 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.861803 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.861988 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:36.862184 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:36.862364 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:36.862513 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:36.862693 2126975 main.go:141] libmachine: Using SSH client type: native
	I0819 12:57:36.862888 2126975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0819 12:57:36.862911 2126975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-873920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-873920/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-873920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:57:36.981484 2126975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:57:36.981518 2126975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 12:57:36.981537 2126975 buildroot.go:174] setting up certificates
	I0819 12:57:36.981546 2126975 provision.go:84] configureAuth start
	I0819 12:57:36.981560 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetMachineName
	I0819 12:57:36.981926 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetIP
	I0819 12:57:36.984679 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.985003 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.985030 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.985143 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:36.986984 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.987310 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:36.987346 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:36.987433 2126975 provision.go:143] copyHostCerts
	I0819 12:57:36.987498 2126975 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 12:57:36.987511 2126975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 12:57:36.987597 2126975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 12:57:36.987705 2126975 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 12:57:36.987716 2126975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 12:57:36.987756 2126975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 12:57:36.987843 2126975 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 12:57:36.987853 2126975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 12:57:36.987889 2126975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 12:57:36.988020 2126975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.test-preload-873920 san=[127.0.0.1 192.168.39.91 localhost minikube test-preload-873920]
	I0819 12:57:37.138007 2126975 provision.go:177] copyRemoteCerts
	I0819 12:57:37.138083 2126975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:57:37.138123 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:37.140984 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.141304 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.141330 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.141563 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:37.141761 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.141928 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:37.142050 2126975 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa Username:docker}
	I0819 12:57:37.226766 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 12:57:37.251259 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:57:37.275267 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 12:57:37.298576 2126975 provision.go:87] duration metric: took 317.015905ms to configureAuth
	I0819 12:57:37.298611 2126975 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:57:37.298796 2126975 config.go:182] Loaded profile config "test-preload-873920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 12:57:37.298882 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:37.301612 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.302060 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.302093 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.302264 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:37.302453 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.302581 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.302713 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:37.302879 2126975 main.go:141] libmachine: Using SSH client type: native
	I0819 12:57:37.303043 2126975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0819 12:57:37.303061 2126975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:57:37.561126 2126975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:57:37.561165 2126975 machine.go:96] duration metric: took 940.635008ms to provisionDockerMachine
	I0819 12:57:37.561178 2126975 start.go:293] postStartSetup for "test-preload-873920" (driver="kvm2")
	I0819 12:57:37.561190 2126975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:57:37.561210 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:37.561561 2126975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:57:37.561605 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:37.564236 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.564585 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.564627 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.564714 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:37.564917 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.565123 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:37.565301 2126975 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa Username:docker}
	I0819 12:57:37.651201 2126975 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:57:37.655767 2126975 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:57:37.655797 2126975 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 12:57:37.655897 2126975 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 12:57:37.655997 2126975 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 12:57:37.656099 2126975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:57:37.666308 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:57:37.689796 2126975 start.go:296] duration metric: took 128.604985ms for postStartSetup
	I0819 12:57:37.689842 2126975 fix.go:56] duration metric: took 20.195851667s for fixHost
	I0819 12:57:37.689868 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:37.692327 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.692703 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.692740 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.692862 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:37.693070 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.693241 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.693367 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:37.693531 2126975 main.go:141] libmachine: Using SSH client type: native
	I0819 12:57:37.693746 2126975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0819 12:57:37.693759 2126975 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:57:37.804710 2126975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724072257.781128812
	
	I0819 12:57:37.804738 2126975 fix.go:216] guest clock: 1724072257.781128812
	I0819 12:57:37.804749 2126975 fix.go:229] Guest: 2024-08-19 12:57:37.781128812 +0000 UTC Remote: 2024-08-19 12:57:37.689847245 +0000 UTC m=+24.545361631 (delta=91.281567ms)
	I0819 12:57:37.804775 2126975 fix.go:200] guest clock delta is within tolerance: 91.281567ms
	I0819 12:57:37.804782 2126975 start.go:83] releasing machines lock for "test-preload-873920", held for 20.310804566s
	I0819 12:57:37.804807 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:37.805106 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetIP
	I0819 12:57:37.807444 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.807816 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.807844 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.807992 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:37.808604 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:37.808794 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:57:37.808893 2126975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:57:37.808947 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:37.808989 2126975 ssh_runner.go:195] Run: cat /version.json
	I0819 12:57:37.809011 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:57:37.811341 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.811591 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.811618 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.811638 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.811742 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:37.811915 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.812054 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:37.812073 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:37.812097 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:37.812224 2126975 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa Username:docker}
	I0819 12:57:37.812244 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:57:37.812388 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:57:37.812569 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:57:37.812743 2126975 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa Username:docker}
	I0819 12:57:37.916453 2126975 ssh_runner.go:195] Run: systemctl --version
	I0819 12:57:37.922553 2126975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:57:38.064749 2126975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:57:38.071498 2126975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:57:38.071575 2126975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:57:38.087453 2126975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:57:38.087478 2126975 start.go:495] detecting cgroup driver to use...
	I0819 12:57:38.087542 2126975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:57:38.103453 2126975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:57:38.117278 2126975 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:57:38.117341 2126975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:57:38.130435 2126975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:57:38.143771 2126975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:57:38.265564 2126975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:57:38.410060 2126975 docker.go:233] disabling docker service ...
	I0819 12:57:38.410131 2126975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:57:38.425753 2126975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:57:38.439239 2126975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:57:38.565324 2126975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:57:38.691025 2126975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:57:38.704905 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:57:38.722715 2126975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0819 12:57:38.722777 2126975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:57:38.732517 2126975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:57:38.732585 2126975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:57:38.742517 2126975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:57:38.752520 2126975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:57:38.762621 2126975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:57:38.772812 2126975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:57:38.782610 2126975 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:57:38.799193 2126975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
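Taken together, the sed/grep edits above amount to a small CRI-O drop-in override. A sketch of what /etc/crio/crio.conf.d/02-crio.conf should roughly contain after this step, with the values pulled from the commands above (the TOML section headers are the usual CRI-O ones and are an assumption here; the sed commands only match the keys):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.7"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]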
	I0819 12:57:38.808963 2126975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:57:38.817736 2126975 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:57:38.817794 2126975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:57:38.831063 2126975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
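The failed sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/* only appears once the br_netfilter module is loaded, which is exactly what the modprobe fixes. A quick manual re-check of the same preconditions on the VM might look like:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward           # expected to print 1 after the echo above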
	I0819 12:57:38.840784 2126975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:57:38.969624 2126975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:57:39.109347 2126975 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:57:39.109427 2126975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:57:39.114886 2126975 start.go:563] Will wait 60s for crictl version
	I0819 12:57:39.114935 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:39.118674 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:57:39.163518 2126975 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:57:39.163601 2126975 ssh_runner.go:195] Run: crio --version
	I0819 12:57:39.193804 2126975 ssh_runner.go:195] Run: crio --version
	I0819 12:57:39.227301 2126975 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0819 12:57:39.228393 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetIP
	I0819 12:57:39.231387 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:39.231684 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:57:39.231721 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:57:39.231956 2126975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:57:39.236188 2126975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:57:39.251121 2126975 kubeadm.go:883] updating cluster {Name:test-preload-873920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-873920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:57:39.251248 2126975 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 12:57:39.251319 2126975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:57:39.294957 2126975 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0819 12:57:39.295029 2126975 ssh_runner.go:195] Run: which lz4
	I0819 12:57:39.299225 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 12:57:39.303580 2126975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 12:57:39.303617 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0819 12:57:40.826320 2126975 crio.go:462] duration metric: took 1.527124462s to copy over tarball
	I0819 12:57:40.826401 2126975 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 12:57:43.129260 2126975 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30282715s)
	I0819 12:57:43.129292 2126975 crio.go:469] duration metric: took 2.302943097s to extract the tarball
	I0819 12:57:43.129310 2126975 ssh_runner.go:146] rm: /preloaded.tar.lz4
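The block from 12:57:39 to 12:57:43 is the preload path: crictl reported none of the v1.24.4 images, so minikube copies the ~459 MB preloaded tarball into the VM and unpacks it over /var before deleting it. A rough standalone equivalent of that sequence, using the key, user and paths from this log (the log writes the tarball to /preloaded.tar.lz4 directly; /tmp is used here only to keep the commands copy-pastable, so this is a sketch rather than the exact mechanism minikube uses):

    scp -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa \
        /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.91:/tmp/preloaded.tar.lz4
    ssh -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa docker@192.168.39.91 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm -f /tmp/preloaded.tar.lz4'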
	I0819 12:57:43.170900 2126975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:57:43.220299 2126975 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0819 12:57:43.220328 2126975 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 12:57:43.220403 2126975 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:57:43.220419 2126975 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:43.220428 2126975 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.220460 2126975 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:43.220407 2126975 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:43.220497 2126975 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:43.220513 2126975 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 12:57:43.220539 2126975 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:43.221899 2126975 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:57:43.221908 2126975 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:43.221924 2126975 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:43.221911 2126975 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.221934 2126975 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 12:57:43.221908 2126975 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:43.221899 2126975 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:43.221958 2126975 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:43.388501 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.393479 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:43.397848 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:43.399948 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:43.409744 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 12:57:43.416689 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:43.440081 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:43.494672 2126975 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0819 12:57:43.494728 2126975 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.494785 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.496061 2126975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:57:43.542482 2126975 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0819 12:57:43.542530 2126975 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:43.542575 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.542620 2126975 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0819 12:57:43.542665 2126975 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:43.542711 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.575037 2126975 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0819 12:57:43.575103 2126975 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:43.575150 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.587065 2126975 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0819 12:57:43.587107 2126975 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0819 12:57:43.587156 2126975 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0819 12:57:43.587176 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.587198 2126975 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:43.587248 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.602732 2126975 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0819 12:57:43.602781 2126975 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:43.602821 2126975 ssh_runner.go:195] Run: which crictl
	I0819 12:57:43.602822 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.708577 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:43.708685 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:43.708698 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:43.708725 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:43.708792 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 12:57:43.708866 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.708915 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:43.847224 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:43.862454 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:43.862509 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 12:57:43.862543 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:43.862621 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:43.862661 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:43.862764 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:57:43.930408 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:57:44.006871 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 12:57:44.020456 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0819 12:57:44.020573 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:57:44.020617 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 12:57:44.020673 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:57:44.020690 2126975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:57:44.020578 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 12:57:44.055732 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 12:57:44.055871 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 12:57:44.111848 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0819 12:57:44.112004 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 12:57:44.151710 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 12:57:44.151754 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0819 12:57:44.151773 2126975 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 12:57:44.151813 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0819 12:57:44.151827 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 12:57:44.151845 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 12:57:44.151760 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 12:57:44.151878 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 12:57:44.151896 2126975 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0819 12:57:44.151923 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0819 12:57:44.151964 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0819 12:57:44.151969 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 12:57:44.151975 2126975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 12:57:47.920419 2126975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.768566002s)
	I0819 12:57:47.920465 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0819 12:57:47.920494 2126975 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.768594099s)
	I0819 12:57:47.920544 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0819 12:57:47.920544 2126975 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.768554578s)
	I0819 12:57:47.920561 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0819 12:57:47.920500 2126975 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 12:57:47.920592 2126975 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.768599088s)
	I0819 12:57:47.920606 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0819 12:57:47.920625 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0819 12:57:47.920645 2126975 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.768776876s)
	I0819 12:57:47.920674 2126975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0819 12:57:48.274177 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 12:57:48.274221 2126975 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 12:57:48.274286 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0819 12:57:50.422591 2126975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.148269047s)
	I0819 12:57:50.422633 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 12:57:50.422674 2126975 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 12:57:50.422746 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0819 12:57:50.569068 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0819 12:57:50.569128 2126975 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 12:57:50.569190 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 12:57:51.313161 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0819 12:57:51.313222 2126975 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 12:57:51.313289 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 12:57:52.157644 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0819 12:57:52.157695 2126975 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 12:57:52.157758 2126975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 12:57:52.902444 2126975 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0819 12:57:52.902497 2126975 cache_images.go:123] Successfully loaded all cached images
	I0819 12:57:52.902504 2126975 cache_images.go:92] duration metric: took 9.682164278s to LoadCachedImages
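Because the extracted preload still did not satisfy the image check, each required image is handled individually: any stale copy is removed with crictl rmi, the cached archive under /var/lib/minikube/images/ is copied in unless it already exists, and it is then loaded with podman load (podman and CRI-O share the same image storage on the minikube guest, which is why the images become visible to crictl afterwards). For a single image the cycle seen above boils down to roughly:

    sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
    stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4   # copy is skipped when this succeeds
    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4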
	I0819 12:57:52.902521 2126975 kubeadm.go:934] updating node { 192.168.39.91 8443 v1.24.4 crio true true} ...
	I0819 12:57:52.902668 2126975 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-873920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-873920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:57:52.902766 2126975 ssh_runner.go:195] Run: crio config
	I0819 12:57:52.955383 2126975 cni.go:84] Creating CNI manager for ""
	I0819 12:57:52.955420 2126975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:57:52.955439 2126975 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:57:52.955473 2126975 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-873920 NodeName:test-preload-873920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:57:52.955640 2126975 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-873920"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:57:52.955728 2126975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0819 12:57:52.966738 2126975 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:57:52.966805 2126975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:57:52.977219 2126975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0819 12:57:52.993655 2126975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:57:53.009622 2126975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0819 12:57:53.025793 2126975 ssh_runner.go:195] Run: grep 192.168.39.91	control-plane.minikube.internal$ /etc/hosts
	I0819 12:57:53.029645 2126975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
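Both /etc/hosts edits (host.minikube.internal at 12:57:39 and control-plane.minikube.internal here) follow the same pattern: strip any stale entry, append the new one, and copy the file back into place with sudo. After this step the guest's /etc/hosts should contain, alongside its existing entries, roughly:

    192.168.39.1    host.minikube.internal
    192.168.39.91   control-plane.minikube.internal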
	I0819 12:57:53.041854 2126975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:57:53.158657 2126975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:57:53.176509 2126975 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920 for IP: 192.168.39.91
	I0819 12:57:53.176538 2126975 certs.go:194] generating shared ca certs ...
	I0819 12:57:53.176557 2126975 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:57:53.176724 2126975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 12:57:53.176773 2126975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 12:57:53.176787 2126975 certs.go:256] generating profile certs ...
	I0819 12:57:53.176896 2126975 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.key
	I0819 12:57:53.176975 2126975 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/apiserver.key.5b2a4282
	I0819 12:57:53.177044 2126975 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/proxy-client.key
	I0819 12:57:53.177196 2126975 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 12:57:53.177245 2126975 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 12:57:53.177259 2126975 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:57:53.177294 2126975 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 12:57:53.177327 2126975 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:57:53.177357 2126975 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 12:57:53.177414 2126975 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 12:57:53.178140 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:57:53.223529 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:57:53.258806 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:57:53.297768 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:57:53.330926 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 12:57:53.361837 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:57:53.396736 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:57:53.420207 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 12:57:53.443108 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 12:57:53.465637 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 12:57:53.489240 2126975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:57:53.511549 2126975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:57:53.527517 2126975 ssh_runner.go:195] Run: openssl version
	I0819 12:57:53.533290 2126975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 12:57:53.544508 2126975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 12:57:53.548977 2126975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 12:57:53.549050 2126975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 12:57:53.555035 2126975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 12:57:53.566364 2126975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 12:57:53.577953 2126975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 12:57:53.582575 2126975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 12:57:53.582640 2126975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 12:57:53.588913 2126975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:57:53.600063 2126975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:57:53.610931 2126975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:57:53.615700 2126975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:57:53.615760 2126975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:57:53.621565 2126975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
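The openssl x509 -hash -noout calls explain the otherwise cryptic symlink names: OpenSSL locates CA certificates by a hash of their subject, so each trusted cert is linked as <subject-hash>.0 under /etc/ssl/certs. For the minikube CA the pairing can be verified by hand on the guest:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem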
	I0819 12:57:53.633020 2126975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:57:53.637752 2126975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:57:53.643778 2126975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:57:53.649924 2126975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:57:53.656224 2126975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:57:53.662631 2126975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:57:53.668613 2126975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
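The run of openssl x509 -noout ... -checkend 86400 commands is the certificate-expiry check: -checkend N exits 0 if the certificate will still be valid N seconds from now and non-zero otherwise, so every control-plane certificate here is being tested for at least 24 hours of remaining validity. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"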
	I0819 12:57:53.674478 2126975 kubeadm.go:392] StartCluster: {Name:test-preload-873920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-873920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:57:53.674577 2126975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:57:53.674630 2126975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:57:53.712600 2126975 cri.go:89] found id: ""
	I0819 12:57:53.712685 2126975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:57:53.722927 2126975 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 12:57:53.722948 2126975 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 12:57:53.723014 2126975 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 12:57:53.732488 2126975 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:57:53.732947 2126975 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-873920" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:57:53.733116 2126975 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-873920" cluster setting kubeconfig missing "test-preload-873920" context setting]
	I0819 12:57:53.733443 2126975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:57:53.734104 2126975 kapi.go:59] client config for test-preload-873920: &rest.Config{Host:"https://192.168.39.91:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 12:57:53.734818 2126975 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 12:57:53.744430 2126975 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.91
	I0819 12:57:53.744459 2126975 kubeadm.go:1160] stopping kube-system containers ...
	I0819 12:57:53.744470 2126975 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 12:57:53.744510 2126975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:57:53.780590 2126975 cri.go:89] found id: ""
	I0819 12:57:53.780658 2126975 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 12:57:53.797959 2126975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:57:53.807794 2126975 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:57:53.807813 2126975 kubeadm.go:157] found existing configuration files:
	
	I0819 12:57:53.807854 2126975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:57:53.817193 2126975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:57:53.817256 2126975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:57:53.827019 2126975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:57:53.836724 2126975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:57:53.836791 2126975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:57:53.846874 2126975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:57:53.856652 2126975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:57:53.856721 2126975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:57:53.866345 2126975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:57:53.875744 2126975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:57:53.875797 2126975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:57:53.885397 2126975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:57:53.894923 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:57:53.991822 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:57:54.987288 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:57:55.250850 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:57:55.312466 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
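Because the old admin/kubelet/controller-manager/scheduler kubeconfigs were missing, the restart falls back to re-running the individual kubeadm init phases against the generated config rather than reusing stale state. Condensed, the sequence in the log (each invocation is wrapped in sudo env PATH=... in the actual commands) is:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml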
	I0819 12:57:55.415726 2126975 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:57:55.415862 2126975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:57:55.916346 2126975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:57:56.416332 2126975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:57:56.436766 2126975 api_server.go:72] duration metric: took 1.021055817s to wait for apiserver process to appear ...
	I0819 12:57:56.436792 2126975 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:57:56.436809 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:57:56.437270 2126975 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": dial tcp 192.168.39.91:8443: connect: connection refused
	I0819 12:57:56.937061 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:58:00.449499 2126975 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 12:58:00.449529 2126975 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 12:58:00.449545 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:58:00.538046 2126975 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 12:58:00.538082 2126975 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 12:58:00.937621 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:58:00.943705 2126975 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 12:58:00.943745 2126975 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 12:58:01.437262 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:58:01.447086 2126975 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 12:58:01.447115 2126975 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 12:58:01.937772 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:58:01.943070 2126975 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0819 12:58:01.949219 2126975 api_server.go:141] control plane version: v1.24.4
	I0819 12:58:01.949249 2126975 api_server.go:131] duration metric: took 5.512448987s to wait for apiserver health ...
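	(editor's note) The repeated [+]/[-] listings above are the verbose per-check breakdown that the apiserver's /healthz endpoint returns while its poststarthooks are still completing; each 500 names the checks that have not finished yet. As an illustration only (not part of the test run), the same breakdown can be fetched by hand against this profile, using the client certificate paths the log prints further below:

	    # per-check health breakdown, via the profile's kubeconfig context
	    kubectl --context test-preload-873920 get --raw='/healthz?verbose'

	    # or directly against the endpoint seen in the log, with the profile's certs
	    curl --cacert /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt \
	         --cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.crt \
	         --key /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.key \
	         'https://192.168.39.91:8443/healthz?verbose'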
	I0819 12:58:01.949260 2126975 cni.go:84] Creating CNI manager for ""
	I0819 12:58:01.949267 2126975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:58:01.950989 2126975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 12:58:01.952043 2126975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 12:58:01.963108 2126975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
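	(editor's note) The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above configures the bridge CNI that minikube just selected for the kvm2 + crio combination. The log does not reproduce the file's contents; the sketch below is only a representative bridge + host-local conflist in the standard CNI format, with an illustrative subnet, not the literal file minikube writes:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }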
	I0819 12:58:01.980242 2126975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:58:01.980355 2126975 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 12:58:01.980380 2126975 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 12:58:01.988880 2126975 system_pods.go:59] 7 kube-system pods found
	I0819 12:58:01.988913 2126975 system_pods.go:61] "coredns-6d4b75cb6d-qlqsm" [79fc2b62-7024-4b19-86cd-5b61854a40b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 12:58:01.988922 2126975 system_pods.go:61] "etcd-test-preload-873920" [9132d7e6-3bc2-4d6b-a06b-dd89880f38c4] Running
	I0819 12:58:01.988930 2126975 system_pods.go:61] "kube-apiserver-test-preload-873920" [59361b30-4ce5-45ba-a1f3-c9c5c466c516] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 12:58:01.988934 2126975 system_pods.go:61] "kube-controller-manager-test-preload-873920" [0740c5d3-c5e3-4fed-8ac2-806f0ec70eba] Running
	I0819 12:58:01.988941 2126975 system_pods.go:61] "kube-proxy-ljcfm" [ca22e36b-6c46-4831-b058-fcc7b0068f76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 12:58:01.988947 2126975 system_pods.go:61] "kube-scheduler-test-preload-873920" [d19246d9-e57d-410e-995b-482a65bc1787] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 12:58:01.988951 2126975 system_pods.go:61] "storage-provisioner" [0bc4df34-4ca9-49a1-a67f-c7e09b83aa49] Running
	I0819 12:58:01.988958 2126975 system_pods.go:74] duration metric: took 8.693448ms to wait for pod list to return data ...
	I0819 12:58:01.988965 2126975 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:58:01.992830 2126975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:58:01.992854 2126975 node_conditions.go:123] node cpu capacity is 2
	I0819 12:58:01.992865 2126975 node_conditions.go:105] duration metric: took 3.895809ms to run NodePressure ...
	I0819 12:58:01.992882 2126975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:58:02.171127 2126975 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 12:58:02.174956 2126975 kubeadm.go:739] kubelet initialised
	I0819 12:58:02.174976 2126975 kubeadm.go:740] duration metric: took 3.825105ms waiting for restarted kubelet to initialise ...
	I0819 12:58:02.174985 2126975 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:58:02.179677 2126975 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:02.183888 2126975 pod_ready.go:98] node "test-preload-873920" hosting pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.183910 2126975 pod_ready.go:82] duration metric: took 4.211203ms for pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace to be "Ready" ...
	E0819 12:58:02.183919 2126975 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-873920" hosting pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.183925 2126975 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:02.187944 2126975 pod_ready.go:98] node "test-preload-873920" hosting pod "etcd-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.187968 2126975 pod_ready.go:82] duration metric: took 4.032179ms for pod "etcd-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	E0819 12:58:02.187979 2126975 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-873920" hosting pod "etcd-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.187986 2126975 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:02.192375 2126975 pod_ready.go:98] node "test-preload-873920" hosting pod "kube-apiserver-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.192396 2126975 pod_ready.go:82] duration metric: took 4.396413ms for pod "kube-apiserver-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	E0819 12:58:02.192406 2126975 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-873920" hosting pod "kube-apiserver-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.192412 2126975 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:02.385231 2126975 pod_ready.go:98] node "test-preload-873920" hosting pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.385272 2126975 pod_ready.go:82] duration metric: took 192.847869ms for pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	E0819 12:58:02.385286 2126975 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-873920" hosting pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.385295 2126975 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ljcfm" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:02.785922 2126975 pod_ready.go:98] node "test-preload-873920" hosting pod "kube-proxy-ljcfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.785963 2126975 pod_ready.go:82] duration metric: took 400.654601ms for pod "kube-proxy-ljcfm" in "kube-system" namespace to be "Ready" ...
	E0819 12:58:02.785977 2126975 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-873920" hosting pod "kube-proxy-ljcfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:02.785985 2126975 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:03.184710 2126975 pod_ready.go:98] node "test-preload-873920" hosting pod "kube-scheduler-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:03.184752 2126975 pod_ready.go:82] duration metric: took 398.758943ms for pod "kube-scheduler-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	E0819 12:58:03.184767 2126975 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-873920" hosting pod "kube-scheduler-test-preload-873920" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:03.184777 2126975 pod_ready.go:39] duration metric: took 1.009783007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:58:03.184800 2126975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 12:58:03.197666 2126975 ops.go:34] apiserver oom_adj: -16
	I0819 12:58:03.197703 2126975 kubeadm.go:597] duration metric: took 9.47473633s to restartPrimaryControlPlane
	I0819 12:58:03.197714 2126975 kubeadm.go:394] duration metric: took 9.523246442s to StartCluster
	I0819 12:58:03.197736 2126975 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:58:03.197819 2126975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:58:03.198543 2126975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:58:03.198782 2126975 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:58:03.198905 2126975 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:58:03.198970 2126975 addons.go:69] Setting storage-provisioner=true in profile "test-preload-873920"
	I0819 12:58:03.198990 2126975 addons.go:69] Setting default-storageclass=true in profile "test-preload-873920"
	I0819 12:58:03.199027 2126975 config.go:182] Loaded profile config "test-preload-873920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 12:58:03.199040 2126975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-873920"
	I0819 12:58:03.199002 2126975 addons.go:234] Setting addon storage-provisioner=true in "test-preload-873920"
	W0819 12:58:03.199095 2126975 addons.go:243] addon storage-provisioner should already be in state true
	I0819 12:58:03.199126 2126975 host.go:66] Checking if "test-preload-873920" exists ...
	I0819 12:58:03.199364 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:58:03.199401 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:58:03.199492 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:58:03.199533 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:58:03.200378 2126975 out.go:177] * Verifying Kubernetes components...
	I0819 12:58:03.201615 2126975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:58:03.214676 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I0819 12:58:03.214891 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I0819 12:58:03.215073 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:58:03.215366 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:58:03.215582 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:58:03.215609 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:58:03.215769 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:58:03.215793 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:58:03.215992 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:58:03.216138 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:58:03.216193 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetState
	I0819 12:58:03.216766 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:58:03.216820 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:58:03.218837 2126975 kapi.go:59] client config for test-preload-873920: &rest.Config{Host:"https://192.168.39.91:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/test-preload-873920/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 12:58:03.219178 2126975 addons.go:234] Setting addon default-storageclass=true in "test-preload-873920"
	W0819 12:58:03.219206 2126975 addons.go:243] addon default-storageclass should already be in state true
	I0819 12:58:03.219235 2126975 host.go:66] Checking if "test-preload-873920" exists ...
	I0819 12:58:03.219599 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:58:03.219666 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:58:03.231849 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0819 12:58:03.232338 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:58:03.232847 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:58:03.232904 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:58:03.233203 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:58:03.233418 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetState
	I0819 12:58:03.233661 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I0819 12:58:03.234035 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:58:03.234451 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:58:03.234475 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:58:03.234814 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:58:03.235063 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:58:03.235407 2126975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:58:03.235433 2126975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:58:03.237276 2126975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:58:03.238490 2126975 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:58:03.238513 2126975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:58:03.238529 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:58:03.241712 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:58:03.242270 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:58:03.242297 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:58:03.242458 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:58:03.242633 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:58:03.242747 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:58:03.242870 2126975 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa Username:docker}
	I0819 12:58:03.249950 2126975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37167
	I0819 12:58:03.250368 2126975 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:58:03.250800 2126975 main.go:141] libmachine: Using API Version  1
	I0819 12:58:03.250827 2126975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:58:03.251119 2126975 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:58:03.251292 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetState
	I0819 12:58:03.252756 2126975 main.go:141] libmachine: (test-preload-873920) Calling .DriverName
	I0819 12:58:03.252918 2126975 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:58:03.252930 2126975 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:58:03.252941 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHHostname
	I0819 12:58:03.255629 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:58:03.256047 2126975 main.go:141] libmachine: (test-preload-873920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:57:c9", ip: ""} in network mk-test-preload-873920: {Iface:virbr1 ExpiryTime:2024-08-19 13:57:28 +0000 UTC Type:0 Mac:52:54:00:b1:57:c9 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:test-preload-873920 Clientid:01:52:54:00:b1:57:c9}
	I0819 12:58:03.256087 2126975 main.go:141] libmachine: (test-preload-873920) DBG | domain test-preload-873920 has defined IP address 192.168.39.91 and MAC address 52:54:00:b1:57:c9 in network mk-test-preload-873920
	I0819 12:58:03.256250 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHPort
	I0819 12:58:03.256403 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHKeyPath
	I0819 12:58:03.256516 2126975 main.go:141] libmachine: (test-preload-873920) Calling .GetSSHUsername
	I0819 12:58:03.256612 2126975 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/test-preload-873920/id_rsa Username:docker}
	I0819 12:58:03.370799 2126975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:58:03.392792 2126975 node_ready.go:35] waiting up to 6m0s for node "test-preload-873920" to be "Ready" ...
	I0819 12:58:03.484560 2126975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:58:03.486785 2126975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:58:04.454754 2126975 main.go:141] libmachine: Making call to close driver server
	I0819 12:58:04.454786 2126975 main.go:141] libmachine: (test-preload-873920) Calling .Close
	I0819 12:58:04.454811 2126975 main.go:141] libmachine: Making call to close driver server
	I0819 12:58:04.454832 2126975 main.go:141] libmachine: (test-preload-873920) Calling .Close
	I0819 12:58:04.455137 2126975 main.go:141] libmachine: (test-preload-873920) DBG | Closing plugin on server side
	I0819 12:58:04.455182 2126975 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:58:04.455190 2126975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:58:04.455201 2126975 main.go:141] libmachine: (test-preload-873920) DBG | Closing plugin on server side
	I0819 12:58:04.455217 2126975 main.go:141] libmachine: Making call to close driver server
	I0819 12:58:04.455217 2126975 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:58:04.455238 2126975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:58:04.455248 2126975 main.go:141] libmachine: Making call to close driver server
	I0819 12:58:04.455262 2126975 main.go:141] libmachine: (test-preload-873920) Calling .Close
	I0819 12:58:04.455228 2126975 main.go:141] libmachine: (test-preload-873920) Calling .Close
	I0819 12:58:04.455498 2126975 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:58:04.455514 2126975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:58:04.455548 2126975 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:58:04.455563 2126975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:58:04.461611 2126975 main.go:141] libmachine: Making call to close driver server
	I0819 12:58:04.461633 2126975 main.go:141] libmachine: (test-preload-873920) Calling .Close
	I0819 12:58:04.461876 2126975 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:58:04.461902 2126975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:58:04.461910 2126975 main.go:141] libmachine: (test-preload-873920) DBG | Closing plugin on server side
	I0819 12:58:04.463571 2126975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 12:58:04.464852 2126975 addons.go:510] duration metric: took 1.26595165s for enable addons: enabled=[storage-provisioner default-storageclass]
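	(editor's note) At this point the two addon manifests applied above (storage-provisioner and default-storageclass) have been accepted by the apiserver. Illustrative follow-up checks, in the same kubectl --context style this report uses elsewhere, would be:

	    # the provisioner pod listed in the kube-system pod dumps above
	    kubectl --context test-preload-873920 -n kube-system get pod storage-provisioner

	    # the default StorageClass created by the default-storageclass addon
	    kubectl --context test-preload-873920 get storageclass

	    # or ask minikube which addons are enabled for this profile
	    minikube -p test-preload-873920 addons list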
	I0819 12:58:05.397388 2126975 node_ready.go:53] node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:07.897172 2126975 node_ready.go:53] node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:10.396250 2126975 node_ready.go:53] node "test-preload-873920" has status "Ready":"False"
	I0819 12:58:11.396481 2126975 node_ready.go:49] node "test-preload-873920" has status "Ready":"True"
	I0819 12:58:11.396509 2126975 node_ready.go:38] duration metric: took 8.003681515s for node "test-preload-873920" to be "Ready" ...
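	(editor's note) The 8.0s wait that just completed is minikube polling the node's Ready condition until it flips from "False" to "True". An equivalent manual check (illustrative only) would be:

	    kubectl --context test-preload-873920 wait --for=condition=Ready node/test-preload-873920 --timeout=6m0s
	    kubectl --context test-preload-873920 get node test-preload-873920 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'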
	I0819 12:58:11.396519 2126975 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:58:11.402010 2126975 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.406408 2126975 pod_ready.go:93] pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace has status "Ready":"True"
	I0819 12:58:11.406429 2126975 pod_ready.go:82] duration metric: took 4.393541ms for pod "coredns-6d4b75cb6d-qlqsm" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.406441 2126975 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.410375 2126975 pod_ready.go:93] pod "etcd-test-preload-873920" in "kube-system" namespace has status "Ready":"True"
	I0819 12:58:11.410394 2126975 pod_ready.go:82] duration metric: took 3.945662ms for pod "etcd-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.410405 2126975 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.414370 2126975 pod_ready.go:93] pod "kube-apiserver-test-preload-873920" in "kube-system" namespace has status "Ready":"True"
	I0819 12:58:11.414386 2126975 pod_ready.go:82] duration metric: took 3.974708ms for pod "kube-apiserver-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.414394 2126975 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.418030 2126975 pod_ready.go:93] pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace has status "Ready":"True"
	I0819 12:58:11.418054 2126975 pod_ready.go:82] duration metric: took 3.648569ms for pod "kube-controller-manager-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.418064 2126975 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ljcfm" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.796936 2126975 pod_ready.go:93] pod "kube-proxy-ljcfm" in "kube-system" namespace has status "Ready":"True"
	I0819 12:58:11.796961 2126975 pod_ready.go:82] duration metric: took 378.889951ms for pod "kube-proxy-ljcfm" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:11.796970 2126975 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:13.803348 2126975 pod_ready.go:103] pod "kube-scheduler-test-preload-873920" in "kube-system" namespace has status "Ready":"False"
	I0819 12:58:14.303468 2126975 pod_ready.go:93] pod "kube-scheduler-test-preload-873920" in "kube-system" namespace has status "Ready":"True"
	I0819 12:58:14.303497 2126975 pod_ready.go:82] duration metric: took 2.506520949s for pod "kube-scheduler-test-preload-873920" in "kube-system" namespace to be "Ready" ...
	I0819 12:58:14.303512 2126975 pod_ready.go:39] duration metric: took 2.906978963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:58:14.303527 2126975 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:58:14.303580 2126975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:58:14.320427 2126975 api_server.go:72] duration metric: took 11.121606323s to wait for apiserver process to appear ...
	I0819 12:58:14.320458 2126975 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:58:14.320483 2126975 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0819 12:58:14.328147 2126975 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0819 12:58:14.329760 2126975 api_server.go:141] control plane version: v1.24.4
	I0819 12:58:14.329785 2126975 api_server.go:131] duration metric: took 9.317539ms to wait for apiserver health ...
	I0819 12:58:14.329795 2126975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:58:14.401946 2126975 system_pods.go:59] 7 kube-system pods found
	I0819 12:58:14.401976 2126975 system_pods.go:61] "coredns-6d4b75cb6d-qlqsm" [79fc2b62-7024-4b19-86cd-5b61854a40b6] Running
	I0819 12:58:14.401981 2126975 system_pods.go:61] "etcd-test-preload-873920" [9132d7e6-3bc2-4d6b-a06b-dd89880f38c4] Running
	I0819 12:58:14.401985 2126975 system_pods.go:61] "kube-apiserver-test-preload-873920" [59361b30-4ce5-45ba-a1f3-c9c5c466c516] Running
	I0819 12:58:14.401989 2126975 system_pods.go:61] "kube-controller-manager-test-preload-873920" [0740c5d3-c5e3-4fed-8ac2-806f0ec70eba] Running
	I0819 12:58:14.401992 2126975 system_pods.go:61] "kube-proxy-ljcfm" [ca22e36b-6c46-4831-b058-fcc7b0068f76] Running
	I0819 12:58:14.401996 2126975 system_pods.go:61] "kube-scheduler-test-preload-873920" [d19246d9-e57d-410e-995b-482a65bc1787] Running
	I0819 12:58:14.401999 2126975 system_pods.go:61] "storage-provisioner" [0bc4df34-4ca9-49a1-a67f-c7e09b83aa49] Running
	I0819 12:58:14.402004 2126975 system_pods.go:74] duration metric: took 72.203806ms to wait for pod list to return data ...
	I0819 12:58:14.402011 2126975 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:58:14.600118 2126975 default_sa.go:45] found service account: "default"
	I0819 12:58:14.600147 2126975 default_sa.go:55] duration metric: took 198.130432ms for default service account to be created ...
	I0819 12:58:14.600157 2126975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:58:14.805814 2126975 system_pods.go:86] 7 kube-system pods found
	I0819 12:58:14.805843 2126975 system_pods.go:89] "coredns-6d4b75cb6d-qlqsm" [79fc2b62-7024-4b19-86cd-5b61854a40b6] Running
	I0819 12:58:14.805848 2126975 system_pods.go:89] "etcd-test-preload-873920" [9132d7e6-3bc2-4d6b-a06b-dd89880f38c4] Running
	I0819 12:58:14.805852 2126975 system_pods.go:89] "kube-apiserver-test-preload-873920" [59361b30-4ce5-45ba-a1f3-c9c5c466c516] Running
	I0819 12:58:14.805856 2126975 system_pods.go:89] "kube-controller-manager-test-preload-873920" [0740c5d3-c5e3-4fed-8ac2-806f0ec70eba] Running
	I0819 12:58:14.805859 2126975 system_pods.go:89] "kube-proxy-ljcfm" [ca22e36b-6c46-4831-b058-fcc7b0068f76] Running
	I0819 12:58:14.805862 2126975 system_pods.go:89] "kube-scheduler-test-preload-873920" [d19246d9-e57d-410e-995b-482a65bc1787] Running
	I0819 12:58:14.805865 2126975 system_pods.go:89] "storage-provisioner" [0bc4df34-4ca9-49a1-a67f-c7e09b83aa49] Running
	I0819 12:58:14.805874 2126975 system_pods.go:126] duration metric: took 205.710003ms to wait for k8s-apps to be running ...
	I0819 12:58:14.805881 2126975 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:58:14.805929 2126975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:58:14.821892 2126975 system_svc.go:56] duration metric: took 16.000318ms WaitForService to wait for kubelet
	I0819 12:58:14.821933 2126975 kubeadm.go:582] duration metric: took 11.623117727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:58:14.822048 2126975 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:58:14.999406 2126975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:58:14.999436 2126975 node_conditions.go:123] node cpu capacity is 2
	I0819 12:58:14.999452 2126975 node_conditions.go:105] duration metric: took 177.391974ms to run NodePressure ...
	I0819 12:58:14.999466 2126975 start.go:241] waiting for startup goroutines ...
	I0819 12:58:14.999483 2126975 start.go:246] waiting for cluster config update ...
	I0819 12:58:14.999497 2126975 start.go:255] writing updated cluster config ...
	I0819 12:58:14.999779 2126975 ssh_runner.go:195] Run: rm -f paused
	I0819 12:58:15.050013 2126975 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0819 12:58:15.051830 2126975 out.go:201] 
	W0819 12:58:15.053019 2126975 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0819 12:58:15.054308 2126975 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0819 12:58:15.055385 2126975 out.go:177] * Done! kubectl is now configured to use "test-preload-873920" cluster and "default" namespace by default
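	(editor's note) The warning two lines above is the expected client/server skew complaint: the host's kubectl is 1.31.0 while this preload cluster runs 1.24.4, seven minor versions apart and well outside kubectl's supported +/-1 minor skew. As the log itself suggests, a version-matched client can be invoked through minikube, e.g.:

	    minikube -p test-preload-873920 kubectl -- version
	    minikube -p test-preload-873920 kubectl -- get pods -A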
	
	
	==> CRI-O <==
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.957212788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072295957193863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=717c2375-ddf7-473e-926c-7e7cfd365174 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.957710851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa5f010c-fc82-427c-a942-5552a3af8067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.957778768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa5f010c-fc82-427c-a942-5552a3af8067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.957972287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aae3adf12c77cba99b27ad8adfa119d55c6a6a2d7b3ef5df3a5666598c02393f,PodSandboxId:d3738bc427176bd3a1d2b0c8829e76a6e7fdfc33f17e2609152c653dd57f21c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724072289614973295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlqsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc2b62-7024-4b19-86cd-5b61854a40b6,},Annotations:map[string]string{io.kubernetes.container.hash: 63897b6b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406a59bf08bf36fcbfba8b2e8dc4bd236db49e579023ababf33927ff5555d999,PodSandboxId:b16a836020487f3c441add3463635c63eadc07f6b88db1d696050d09823e9e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724072282456071458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ljcfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ca22e36b-6c46-4831-b058-fcc7b0068f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2f89d2e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b02959257b5a534515c2df4a75d6bded17e3c29f68ed13f1adf5598803b2de,PodSandboxId:aa0f20277e39515204a233c1f859782a97541ad45fb9317a0b573bd0fb9f672d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072282414017491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4df34-4ca9-49a1-a67f-c7e09b83aa49,},Annotations:map[string]string{io.kubernetes.container.hash: 2b569e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3d20428d4a81721c370ec01f1b175e2c48d324c83cb7731e5025dd69cc67f1,PodSandboxId:f5a1e82f408e70f6fd22a46fbc57a2902907389d9d9cbc5bd587ed76487ec9b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724072276182647877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dda5b40d
60ef6be2c00b49e7b901daa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711916b1414e82f803fa6c470bfc1c396bb814d24e6bb816e039a85626040de5,PodSandboxId:70b3a54e4efb0a428a9feee39c293df5cfc4640477932f0a75e3bf4c68216168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724072276175556126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012c91878c7a19d204d9d1ee5576e242,},Annotations:map
[string]string{io.kubernetes.container.hash: 7787d303,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15705c32a5d4af3ac8bd9c343d4d9f0ae2cf5726d50a25dcfbc5de5368f851dc,PodSandboxId:5b0616d72920f49e62afc534b18349cc765cf5c4946f43ba2190eca3d9a180e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724072276132396657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c332e2007ab936c515fad5c542abc76,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a62c9ef2fc4e477dacafcf14d3eab4bc5e4f668d3bc77381aa1ecb540bf6b7d,PodSandboxId:b1133ea1b65d4e9ef5c442e067a32eedb2e30350482785b0c8f5129d0d733ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724072276100644480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640b15cb287cfe54d5c81dfda6f6879c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1cd38cd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa5f010c-fc82-427c-a942-5552a3af8067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.997032294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00876c73-6837-471a-8468-e85492071afa name=/runtime.v1.RuntimeService/Version
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.997103391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00876c73-6837-471a-8468-e85492071afa name=/runtime.v1.RuntimeService/Version
	Aug 19 12:58:15 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.998362553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2455baee-b1af-4bd1-b3cc-75a6cf9041c4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:15.999490086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072295999435126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2455baee-b1af-4bd1-b3cc-75a6cf9041c4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.002198238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a09da482-2f78-44ff-a0dc-abc8ffcd4f2b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.002336624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a09da482-2f78-44ff-a0dc-abc8ffcd4f2b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.002540903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aae3adf12c77cba99b27ad8adfa119d55c6a6a2d7b3ef5df3a5666598c02393f,PodSandboxId:d3738bc427176bd3a1d2b0c8829e76a6e7fdfc33f17e2609152c653dd57f21c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724072289614973295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlqsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc2b62-7024-4b19-86cd-5b61854a40b6,},Annotations:map[string]string{io.kubernetes.container.hash: 63897b6b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406a59bf08bf36fcbfba8b2e8dc4bd236db49e579023ababf33927ff5555d999,PodSandboxId:b16a836020487f3c441add3463635c63eadc07f6b88db1d696050d09823e9e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724072282456071458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ljcfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ca22e36b-6c46-4831-b058-fcc7b0068f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2f89d2e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b02959257b5a534515c2df4a75d6bded17e3c29f68ed13f1adf5598803b2de,PodSandboxId:aa0f20277e39515204a233c1f859782a97541ad45fb9317a0b573bd0fb9f672d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072282414017491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4df34-4ca9-49a1-a67f-c7e09b83aa49,},Annotations:map[string]string{io.kubernetes.container.hash: 2b569e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3d20428d4a81721c370ec01f1b175e2c48d324c83cb7731e5025dd69cc67f1,PodSandboxId:f5a1e82f408e70f6fd22a46fbc57a2902907389d9d9cbc5bd587ed76487ec9b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724072276182647877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dda5b40d
60ef6be2c00b49e7b901daa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711916b1414e82f803fa6c470bfc1c396bb814d24e6bb816e039a85626040de5,PodSandboxId:70b3a54e4efb0a428a9feee39c293df5cfc4640477932f0a75e3bf4c68216168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724072276175556126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012c91878c7a19d204d9d1ee5576e242,},Annotations:map
[string]string{io.kubernetes.container.hash: 7787d303,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15705c32a5d4af3ac8bd9c343d4d9f0ae2cf5726d50a25dcfbc5de5368f851dc,PodSandboxId:5b0616d72920f49e62afc534b18349cc765cf5c4946f43ba2190eca3d9a180e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724072276132396657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c332e2007ab936c515fad5c542abc76,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a62c9ef2fc4e477dacafcf14d3eab4bc5e4f668d3bc77381aa1ecb540bf6b7d,PodSandboxId:b1133ea1b65d4e9ef5c442e067a32eedb2e30350482785b0c8f5129d0d733ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724072276100644480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640b15cb287cfe54d5c81dfda6f6879c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1cd38cd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a09da482-2f78-44ff-a0dc-abc8ffcd4f2b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.038674280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c80ff2b-2df6-4829-b9b1-a9050c2331bc name=/runtime.v1.RuntimeService/Version
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.038748187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c80ff2b-2df6-4829-b9b1-a9050c2331bc name=/runtime.v1.RuntimeService/Version
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.039966914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=502ef769-327a-4ab4-bfdc-338a2473664e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.040460297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072296040420879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=502ef769-327a-4ab4-bfdc-338a2473664e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.041050869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1891047d-d924-41d6-8d8e-828129d077d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.041101671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1891047d-d924-41d6-8d8e-828129d077d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.041302474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aae3adf12c77cba99b27ad8adfa119d55c6a6a2d7b3ef5df3a5666598c02393f,PodSandboxId:d3738bc427176bd3a1d2b0c8829e76a6e7fdfc33f17e2609152c653dd57f21c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724072289614973295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlqsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc2b62-7024-4b19-86cd-5b61854a40b6,},Annotations:map[string]string{io.kubernetes.container.hash: 63897b6b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406a59bf08bf36fcbfba8b2e8dc4bd236db49e579023ababf33927ff5555d999,PodSandboxId:b16a836020487f3c441add3463635c63eadc07f6b88db1d696050d09823e9e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724072282456071458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ljcfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ca22e36b-6c46-4831-b058-fcc7b0068f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2f89d2e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b02959257b5a534515c2df4a75d6bded17e3c29f68ed13f1adf5598803b2de,PodSandboxId:aa0f20277e39515204a233c1f859782a97541ad45fb9317a0b573bd0fb9f672d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072282414017491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4df34-4ca9-49a1-a67f-c7e09b83aa49,},Annotations:map[string]string{io.kubernetes.container.hash: 2b569e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3d20428d4a81721c370ec01f1b175e2c48d324c83cb7731e5025dd69cc67f1,PodSandboxId:f5a1e82f408e70f6fd22a46fbc57a2902907389d9d9cbc5bd587ed76487ec9b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724072276182647877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dda5b40d
60ef6be2c00b49e7b901daa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711916b1414e82f803fa6c470bfc1c396bb814d24e6bb816e039a85626040de5,PodSandboxId:70b3a54e4efb0a428a9feee39c293df5cfc4640477932f0a75e3bf4c68216168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724072276175556126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012c91878c7a19d204d9d1ee5576e242,},Annotations:map
[string]string{io.kubernetes.container.hash: 7787d303,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15705c32a5d4af3ac8bd9c343d4d9f0ae2cf5726d50a25dcfbc5de5368f851dc,PodSandboxId:5b0616d72920f49e62afc534b18349cc765cf5c4946f43ba2190eca3d9a180e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724072276132396657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c332e2007ab936c515fad5c542abc76,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a62c9ef2fc4e477dacafcf14d3eab4bc5e4f668d3bc77381aa1ecb540bf6b7d,PodSandboxId:b1133ea1b65d4e9ef5c442e067a32eedb2e30350482785b0c8f5129d0d733ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724072276100644480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640b15cb287cfe54d5c81dfda6f6879c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1cd38cd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1891047d-d924-41d6-8d8e-828129d077d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.074098284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07203c1e-06d3-41a8-b078-a832bfc92345 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.074167516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07203c1e-06d3-41a8-b078-a832bfc92345 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.075458980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54228f7e-c35d-4723-8af9-2fccf9fa56cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.075879652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072296075859108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54228f7e-c35d-4723-8af9-2fccf9fa56cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.076594188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f099eb29-0737-4130-8db6-aa987579b0be name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.076647485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f099eb29-0737-4130-8db6-aa987579b0be name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:58:16 test-preload-873920 crio[690]: time="2024-08-19 12:58:16.076813018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aae3adf12c77cba99b27ad8adfa119d55c6a6a2d7b3ef5df3a5666598c02393f,PodSandboxId:d3738bc427176bd3a1d2b0c8829e76a6e7fdfc33f17e2609152c653dd57f21c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724072289614973295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlqsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc2b62-7024-4b19-86cd-5b61854a40b6,},Annotations:map[string]string{io.kubernetes.container.hash: 63897b6b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406a59bf08bf36fcbfba8b2e8dc4bd236db49e579023ababf33927ff5555d999,PodSandboxId:b16a836020487f3c441add3463635c63eadc07f6b88db1d696050d09823e9e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724072282456071458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ljcfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ca22e36b-6c46-4831-b058-fcc7b0068f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2f89d2e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b02959257b5a534515c2df4a75d6bded17e3c29f68ed13f1adf5598803b2de,PodSandboxId:aa0f20277e39515204a233c1f859782a97541ad45fb9317a0b573bd0fb9f672d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072282414017491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4df34-4ca9-49a1-a67f-c7e09b83aa49,},Annotations:map[string]string{io.kubernetes.container.hash: 2b569e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3d20428d4a81721c370ec01f1b175e2c48d324c83cb7731e5025dd69cc67f1,PodSandboxId:f5a1e82f408e70f6fd22a46fbc57a2902907389d9d9cbc5bd587ed76487ec9b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724072276182647877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dda5b40d
60ef6be2c00b49e7b901daa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711916b1414e82f803fa6c470bfc1c396bb814d24e6bb816e039a85626040de5,PodSandboxId:70b3a54e4efb0a428a9feee39c293df5cfc4640477932f0a75e3bf4c68216168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724072276175556126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012c91878c7a19d204d9d1ee5576e242,},Annotations:map
[string]string{io.kubernetes.container.hash: 7787d303,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15705c32a5d4af3ac8bd9c343d4d9f0ae2cf5726d50a25dcfbc5de5368f851dc,PodSandboxId:5b0616d72920f49e62afc534b18349cc765cf5c4946f43ba2190eca3d9a180e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724072276132396657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c332e2007ab936c515fad5c542abc76,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a62c9ef2fc4e477dacafcf14d3eab4bc5e4f668d3bc77381aa1ecb540bf6b7d,PodSandboxId:b1133ea1b65d4e9ef5c442e067a32eedb2e30350482785b0c8f5129d0d733ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724072276100644480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-873920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640b15cb287cfe54d5c81dfda6f6879c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1cd38cd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f099eb29-0737-4130-8db6-aa987579b0be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aae3adf12c77c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   d3738bc427176       coredns-6d4b75cb6d-qlqsm
	406a59bf08bf3       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   b16a836020487       kube-proxy-ljcfm
	50b02959257b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   aa0f20277e395       storage-provisioner
	8c3d20428d4a8       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   f5a1e82f408e7       kube-scheduler-test-preload-873920
	711916b1414e8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   70b3a54e4efb0       etcd-test-preload-873920
	15705c32a5d4a       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   5b0616d72920f       kube-controller-manager-test-preload-873920
	7a62c9ef2fc4e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   b1133ea1b65d4       kube-apiserver-test-preload-873920
	
	
	==> coredns [aae3adf12c77cba99b27ad8adfa119d55c6a6a2d7b3ef5df3a5666598c02393f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:42473 - 54175 "HINFO IN 6950845208122733999.5992742973072638844. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019297531s
	
	
	==> describe nodes <==
	Name:               test-preload-873920
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-873920
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=test-preload-873920
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_56_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:56:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-873920
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:58:10 +0000   Mon, 19 Aug 2024 12:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:58:10 +0000   Mon, 19 Aug 2024 12:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:58:10 +0000   Mon, 19 Aug 2024 12:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:58:10 +0000   Mon, 19 Aug 2024 12:58:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    test-preload-873920
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c735cad17be4e7eabadef8fb840e966
	  System UUID:                4c735cad-17be-4e7e-abad-ef8fb840e966
	  Boot ID:                    282e6d64-f96b-444b-8e81-cfc95f69f6b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-qlqsm                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-test-preload-873920                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-873920             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-test-preload-873920    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-ljcfm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-test-preload-873920             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node test-preload-873920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node test-preload-873920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node test-preload-873920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                kubelet          Node test-preload-873920 status is now: NodeReady
	  Normal  RegisteredNode           77s                node-controller  Node test-preload-873920 event: Registered Node test-preload-873920 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-873920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-873920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-873920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-873920 event: Registered Node test-preload-873920 in Controller
	
	
	==> dmesg <==
	[Aug19 12:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050556] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040574] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782415] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.394565] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.594559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.594854] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.059848] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067300] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.156185] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.139022] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.279041] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[ +14.191867] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.056327] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.020350] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +3.790340] kauditd_printk_skb: 105 callbacks suppressed
	[Aug19 12:58] systemd-fstab-generator[1765]: Ignoring "noauto" option for root device
	[  +6.154763] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [711916b1414e82f803fa6c470bfc1c396bb814d24e6bb816e039a85626040de5] <==
	{"level":"info","ts":"2024-08-19T12:57:56.553Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3a19c1a50e8a825c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-19T12:57:56.555Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T12:57:56.556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c switched to configuration voters=(4186590243275309660)"}
	{"level":"info","ts":"2024-08-19T12:57:56.557Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"674de9ca81299bdc","local-member-id":"3a19c1a50e8a825c","added-peer-id":"3a19c1a50e8a825c","added-peer-peer-urls":["https://192.168.39.91:2380"]}
	{"level":"info","ts":"2024-08-19T12:57:56.558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"674de9ca81299bdc","local-member-id":"3a19c1a50e8a825c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:57:56.558Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:57:56.570Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:57:56.570Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.91:2380"}
	{"level":"info","ts":"2024-08-19T12:57:56.570Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.91:2380"}
	{"level":"info","ts":"2024-08-19T12:57:56.572Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:57:56.572Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3a19c1a50e8a825c","initial-advertise-peer-urls":["https://192.168.39.91:2380"],"listen-peer-urls":["https://192.168.39.91:2380"],"advertise-client-urls":["https://192.168.39.91:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.91:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:57:58.017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c received MsgPreVoteResp from 3a19c1a50e8a825c at term 2"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c received MsgVoteResp from 3a19c1a50e8a825c at term 3"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3a19c1a50e8a825c elected leader 3a19c1a50e8a825c at term 3"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3a19c1a50e8a825c","local-member-attributes":"{Name:test-preload-873920 ClientURLs:[https://192.168.39.91:2379]}","request-path":"/0/members/3a19c1a50e8a825c/attributes","cluster-id":"674de9ca81299bdc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:57:58.018Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:57:58.020Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:57:58.020Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:57:58.021Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.91:2379"}
	{"level":"info","ts":"2024-08-19T12:57:58.030Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:57:58.030Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:58:16 up 0 min,  0 users,  load average: 0.57, 0.15, 0.05
	Linux test-preload-873920 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7a62c9ef2fc4e477dacafcf14d3eab4bc5e4f668d3bc77381aa1ecb540bf6b7d] <==
	I0819 12:58:00.439351       1 naming_controller.go:291] Starting NamingConditionController
	I0819 12:58:00.439588       1 establishing_controller.go:76] Starting EstablishingController
	I0819 12:58:00.439650       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0819 12:58:00.439683       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 12:58:00.439698       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0819 12:58:00.439808       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0819 12:58:00.439830       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0819 12:58:00.502506       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 12:58:00.503026       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 12:58:00.505429       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 12:58:00.539977       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 12:58:00.540667       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:58:00.570060       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 12:58:00.576645       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:58:00.576887       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:58:01.066191       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 12:58:01.372908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:58:02.085952       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 12:58:02.095956       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 12:58:02.133690       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 12:58:02.151774       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:58:02.158155       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:58:02.829892       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0819 12:58:12.941203       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:58:13.090003       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [15705c32a5d4af3ac8bd9c343d4d9f0ae2cf5726d50a25dcfbc5de5368f851dc] <==
	I0819 12:58:12.964322       1 shared_informer.go:262] Caches are synced for expand
	I0819 12:58:12.966627       1 shared_informer.go:262] Caches are synced for PVC protection
	I0819 12:58:12.973411       1 shared_informer.go:262] Caches are synced for persistent volume
	I0819 12:58:12.974759       1 shared_informer.go:262] Caches are synced for stateful set
	I0819 12:58:13.003112       1 shared_informer.go:262] Caches are synced for attach detach
	I0819 12:58:13.068953       1 shared_informer.go:262] Caches are synced for deployment
	I0819 12:58:13.070331       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0819 12:58:13.080731       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0819 12:58:13.080842       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0819 12:58:13.081378       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0819 12:58:13.083640       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0819 12:58:13.085976       1 shared_informer.go:262] Caches are synced for disruption
	I0819 12:58:13.086004       1 disruption.go:371] Sending events to api server.
	I0819 12:58:13.141953       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 12:58:13.142241       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 12:58:13.152373       1 shared_informer.go:262] Caches are synced for daemon sets
	I0819 12:58:13.183494       1 shared_informer.go:262] Caches are synced for taint
	I0819 12:58:13.183634       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0819 12:58:13.183761       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-873920. Assuming now as a timestamp.
	I0819 12:58:13.183809       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0819 12:58:13.184131       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0819 12:58:13.184392       1 event.go:294] "Event occurred" object="test-preload-873920" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-873920 event: Registered Node test-preload-873920 in Controller"
	I0819 12:58:13.559932       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 12:58:13.559969       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 12:58:13.585566       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [406a59bf08bf36fcbfba8b2e8dc4bd236db49e579023ababf33927ff5555d999] <==
	I0819 12:58:02.787831       1 node.go:163] Successfully retrieved node IP: 192.168.39.91
	I0819 12:58:02.787910       1 server_others.go:138] "Detected node IP" address="192.168.39.91"
	I0819 12:58:02.787970       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 12:58:02.817928       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 12:58:02.817958       1 server_others.go:206] "Using iptables Proxier"
	I0819 12:58:02.818382       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 12:58:02.819165       1 server.go:661] "Version info" version="v1.24.4"
	I0819 12:58:02.819194       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:58:02.820354       1 config.go:317] "Starting service config controller"
	I0819 12:58:02.820390       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 12:58:02.820411       1 config.go:226] "Starting endpoint slice config controller"
	I0819 12:58:02.820414       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 12:58:02.821413       1 config.go:444] "Starting node config controller"
	I0819 12:58:02.821438       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 12:58:02.921564       1 shared_informer.go:262] Caches are synced for node config
	I0819 12:58:02.921612       1 shared_informer.go:262] Caches are synced for service config
	I0819 12:58:02.921677       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8c3d20428d4a81721c370ec01f1b175e2c48d324c83cb7731e5025dd69cc67f1] <==
	I0819 12:57:57.534396       1 serving.go:348] Generated self-signed cert in-memory
	W0819 12:58:00.451302       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:58:00.451412       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:58:00.451446       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:58:00.451477       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:58:00.534474       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0819 12:58:00.534512       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:58:00.542319       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0819 12:58:00.543359       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:58:00.543437       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:58:00.543494       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0819 12:58:00.643639       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:58:00 test-preload-873920 kubelet[1140]: I0819 12:58:00.554705    1140 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-873920"
	Aug 19 12:58:00 test-preload-873920 kubelet[1140]: I0819 12:58:00.557205    1140 setters.go:532] "Node became not ready" node="test-preload-873920" condition={Type:Ready Status:False LastHeartbeatTime:2024-08-19 12:58:00.55710957 +0000 UTC m=+5.313332185 LastTransitionTime:2024-08-19 12:58:00.55710957 +0000 UTC m=+5.313332185 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.384568    1140 apiserver.go:52] "Watching apiserver"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.391352    1140 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.392637    1140 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.393548    1140 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: E0819 12:58:01.394702    1140 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qlqsm" podUID=79fc2b62-7024-4b19-86cd-5b61854a40b6
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545326    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdjw\" (UniqueName: \"kubernetes.io/projected/0bc4df34-4ca9-49a1-a67f-c7e09b83aa49-kube-api-access-fpdjw\") pod \"storage-provisioner\" (UID: \"0bc4df34-4ca9-49a1-a67f-c7e09b83aa49\") " pod="kube-system/storage-provisioner"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545379    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfjl\" (UniqueName: \"kubernetes.io/projected/ca22e36b-6c46-4831-b058-fcc7b0068f76-kube-api-access-ttfjl\") pod \"kube-proxy-ljcfm\" (UID: \"ca22e36b-6c46-4831-b058-fcc7b0068f76\") " pod="kube-system/kube-proxy-ljcfm"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545405    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume\") pod \"coredns-6d4b75cb6d-qlqsm\" (UID: \"79fc2b62-7024-4b19-86cd-5b61854a40b6\") " pod="kube-system/coredns-6d4b75cb6d-qlqsm"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545424    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca22e36b-6c46-4831-b058-fcc7b0068f76-kube-proxy\") pod \"kube-proxy-ljcfm\" (UID: \"ca22e36b-6c46-4831-b058-fcc7b0068f76\") " pod="kube-system/kube-proxy-ljcfm"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545442    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0bc4df34-4ca9-49a1-a67f-c7e09b83aa49-tmp\") pod \"storage-provisioner\" (UID: \"0bc4df34-4ca9-49a1-a67f-c7e09b83aa49\") " pod="kube-system/storage-provisioner"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545468    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca22e36b-6c46-4831-b058-fcc7b0068f76-lib-modules\") pod \"kube-proxy-ljcfm\" (UID: \"ca22e36b-6c46-4831-b058-fcc7b0068f76\") " pod="kube-system/kube-proxy-ljcfm"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545494    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwt6\" (UniqueName: \"kubernetes.io/projected/79fc2b62-7024-4b19-86cd-5b61854a40b6-kube-api-access-rkwt6\") pod \"coredns-6d4b75cb6d-qlqsm\" (UID: \"79fc2b62-7024-4b19-86cd-5b61854a40b6\") " pod="kube-system/coredns-6d4b75cb6d-qlqsm"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545517    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca22e36b-6c46-4831-b058-fcc7b0068f76-xtables-lock\") pod \"kube-proxy-ljcfm\" (UID: \"ca22e36b-6c46-4831-b058-fcc7b0068f76\") " pod="kube-system/kube-proxy-ljcfm"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: I0819 12:58:01.545534    1140 reconciler.go:159] "Reconciler: start to sync state"
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: E0819 12:58:01.649883    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:58:01 test-preload-873920 kubelet[1140]: E0819 12:58:01.650306    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume podName:79fc2b62-7024-4b19-86cd-5b61854a40b6 nodeName:}" failed. No retries permitted until 2024-08-19 12:58:02.150114586 +0000 UTC m=+6.906337202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume") pod "coredns-6d4b75cb6d-qlqsm" (UID: "79fc2b62-7024-4b19-86cd-5b61854a40b6") : object "kube-system"/"coredns" not registered
	Aug 19 12:58:02 test-preload-873920 kubelet[1140]: E0819 12:58:02.153503    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:58:02 test-preload-873920 kubelet[1140]: E0819 12:58:02.153625    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume podName:79fc2b62-7024-4b19-86cd-5b61854a40b6 nodeName:}" failed. No retries permitted until 2024-08-19 12:58:03.153603629 +0000 UTC m=+7.909826244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume") pod "coredns-6d4b75cb6d-qlqsm" (UID: "79fc2b62-7024-4b19-86cd-5b61854a40b6") : object "kube-system"/"coredns" not registered
	Aug 19 12:58:03 test-preload-873920 kubelet[1140]: E0819 12:58:03.161820    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:58:03 test-preload-873920 kubelet[1140]: E0819 12:58:03.161890    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume podName:79fc2b62-7024-4b19-86cd-5b61854a40b6 nodeName:}" failed. No retries permitted until 2024-08-19 12:58:05.161876468 +0000 UTC m=+9.918099095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume") pod "coredns-6d4b75cb6d-qlqsm" (UID: "79fc2b62-7024-4b19-86cd-5b61854a40b6") : object "kube-system"/"coredns" not registered
	Aug 19 12:58:03 test-preload-873920 kubelet[1140]: E0819 12:58:03.495562    1140 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qlqsm" podUID=79fc2b62-7024-4b19-86cd-5b61854a40b6
	Aug 19 12:58:05 test-preload-873920 kubelet[1140]: E0819 12:58:05.178962    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:58:05 test-preload-873920 kubelet[1140]: E0819 12:58:05.179021    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume podName:79fc2b62-7024-4b19-86cd-5b61854a40b6 nodeName:}" failed. No retries permitted until 2024-08-19 12:58:09.179006163 +0000 UTC m=+13.935228790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/79fc2b62-7024-4b19-86cd-5b61854a40b6-config-volume") pod "coredns-6d4b75cb6d-qlqsm" (UID: "79fc2b62-7024-4b19-86cd-5b61854a40b6") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [50b02959257b5a534515c2df4a75d6bded17e3c29f68ed13f1adf5598803b2de] <==
	I0819 12:58:02.599349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-873920 -n test-preload-873920
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-873920 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-873920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-873920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-873920: (1.011494212s)
--- FAIL: TestPreload (170.97s)
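For anyone triaging this failure locally, a minimal reproduction sketch follows. Assumptions (not taken from this report): a checked-out minikube source tree, a Makefile target that builds out/minikube-linux-amd64, and the integration suite's -minikube-start-args flag as defined in test/integration; exact target and flag names may differ from what this CI job used.

	# build the binary the integration tests exercise (target name is an assumption)
	make out/minikube-linux-amd64
	# run only TestPreload against the same driver/runtime combination as this job
	go test ./test/integration -run TestPreload -timeout 60m -v \
	  -args -minikube-start-args="--driver=kvm2 --container-runtime=crio"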

                                                
                                    
x
+
TestKubernetesUpgrade (390.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m47.542325709s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-227337] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-227337" primary control-plane node in "kubernetes-upgrade-227337" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:02:04.618614 2132354 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:02:04.618732 2132354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:02:04.618741 2132354 out.go:358] Setting ErrFile to fd 2...
	I0819 13:02:04.618746 2132354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:02:04.618963 2132354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:02:04.619537 2132354 out.go:352] Setting JSON to false
	I0819 13:02:04.620616 2132354 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":164675,"bootTime":1723907850,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:02:04.620681 2132354 start.go:139] virtualization: kvm guest
	I0819 13:02:04.622792 2132354 out.go:177] * [kubernetes-upgrade-227337] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:02:04.624184 2132354 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:02:04.624194 2132354 notify.go:220] Checking for updates...
	I0819 13:02:04.626213 2132354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:02:04.627456 2132354 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:02:04.628796 2132354 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:02:04.629916 2132354 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:02:04.631065 2132354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:02:04.632655 2132354 config.go:182] Loaded profile config "NoKubernetes-237597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0819 13:02:04.632746 2132354 config.go:182] Loaded profile config "cert-expiration-405492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:02:04.632828 2132354 config.go:182] Loaded profile config "running-upgrade-038031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0819 13:02:04.632932 2132354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:02:04.670771 2132354 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 13:02:04.672214 2132354 start.go:297] selected driver: kvm2
	I0819 13:02:04.672239 2132354 start.go:901] validating driver "kvm2" against <nil>
	I0819 13:02:04.672251 2132354 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:02:04.672982 2132354 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:02:04.673058 2132354 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:02:04.689938 2132354 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:02:04.689996 2132354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 13:02:04.690271 2132354 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 13:02:04.690348 2132354 cni.go:84] Creating CNI manager for ""
	I0819 13:02:04.690367 2132354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:02:04.690378 2132354 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 13:02:04.690447 2132354 start.go:340] cluster config:
	{Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:02:04.690584 2132354 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:02:04.692463 2132354 out.go:177] * Starting "kubernetes-upgrade-227337" primary control-plane node in "kubernetes-upgrade-227337" cluster
	I0819 13:02:04.693778 2132354 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:02:04.693821 2132354 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:02:04.693846 2132354 cache.go:56] Caching tarball of preloaded images
	I0819 13:02:04.693946 2132354 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:02:04.693959 2132354 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 13:02:04.694076 2132354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/config.json ...
	I0819 13:02:04.694101 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/config.json: {Name:mkd4ebb41ccd956ce5b96b1d1c15caba28fede03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:04.694442 2132354 start.go:360] acquireMachinesLock for kubernetes-upgrade-227337: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:02:19.681318 2132354 start.go:364] duration metric: took 14.986832754s to acquireMachinesLock for "kubernetes-upgrade-227337"
	I0819 13:02:19.681398 2132354 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:02:19.681553 2132354 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 13:02:19.713010 2132354 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 13:02:19.713241 2132354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:02:19.713290 2132354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:02:19.730835 2132354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0819 13:02:19.731437 2132354 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:02:19.732103 2132354 main.go:141] libmachine: Using API Version  1
	I0819 13:02:19.732133 2132354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:02:19.732570 2132354 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:02:19.736414 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetMachineName
	I0819 13:02:19.736620 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:19.736812 2132354 start.go:159] libmachine.API.Create for "kubernetes-upgrade-227337" (driver="kvm2")
	I0819 13:02:19.736853 2132354 client.go:168] LocalClient.Create starting
	I0819 13:02:19.736898 2132354 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 13:02:19.736939 2132354 main.go:141] libmachine: Decoding PEM data...
	I0819 13:02:19.736969 2132354 main.go:141] libmachine: Parsing certificate...
	I0819 13:02:19.737046 2132354 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 13:02:19.737076 2132354 main.go:141] libmachine: Decoding PEM data...
	I0819 13:02:19.737093 2132354 main.go:141] libmachine: Parsing certificate...
	I0819 13:02:19.737116 2132354 main.go:141] libmachine: Running pre-create checks...
	I0819 13:02:19.737127 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .PreCreateCheck
	I0819 13:02:19.737524 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetConfigRaw
	I0819 13:02:19.738064 2132354 main.go:141] libmachine: Creating machine...
	I0819 13:02:19.738088 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .Create
	I0819 13:02:19.738235 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Creating KVM machine...
	I0819 13:02:19.739731 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found existing default KVM network
	I0819 13:02:19.741290 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:19.741076 2132533 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3e:dd:9e} reservation:<nil>}
	I0819 13:02:19.742860 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:19.742759 2132533 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00035e490}
	I0819 13:02:19.742888 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | created network xml: 
	I0819 13:02:19.742900 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | <network>
	I0819 13:02:19.742910 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |   <name>mk-kubernetes-upgrade-227337</name>
	I0819 13:02:19.742929 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |   <dns enable='no'/>
	I0819 13:02:19.742941 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |   
	I0819 13:02:19.742951 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0819 13:02:19.742966 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |     <dhcp>
	I0819 13:02:19.742978 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0819 13:02:19.742989 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |     </dhcp>
	I0819 13:02:19.743000 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |   </ip>
	I0819 13:02:19.743010 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG |   
	I0819 13:02:19.743037 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | </network>
	I0819 13:02:19.743056 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | 
	I0819 13:02:19.864032 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | trying to create private KVM network mk-kubernetes-upgrade-227337 192.168.50.0/24...
	I0819 13:02:19.947748 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | private KVM network mk-kubernetes-upgrade-227337 192.168.50.0/24 created
	I0819 13:02:19.947844 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337 ...
	I0819 13:02:19.947878 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 13:02:19.947895 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:19.947706 2132533 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:02:19.948049 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 13:02:20.347866 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:20.347742 2132533 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa...
	I0819 13:02:20.460363 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:20.460213 2132533 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/kubernetes-upgrade-227337.rawdisk...
	I0819 13:02:20.460400 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Writing magic tar header
	I0819 13:02:20.460416 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Writing SSH key tar header
	I0819 13:02:20.460430 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:20.460335 2132533 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337 ...
	I0819 13:02:20.460446 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337
	I0819 13:02:20.460457 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 13:02:20.460484 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337 (perms=drwx------)
	I0819 13:02:20.460510 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 13:02:20.460523 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:02:20.460538 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 13:02:20.460556 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 13:02:20.460568 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 13:02:20.460581 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 13:02:20.460595 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 13:02:20.460604 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Creating domain...
	I0819 13:02:20.460645 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 13:02:20.460677 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home/jenkins
	I0819 13:02:20.460695 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Checking permissions on dir: /home
	I0819 13:02:20.460711 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Skipping /home - not owner
	I0819 13:02:20.461870 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) define libvirt domain using xml: 
	I0819 13:02:20.461888 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) <domain type='kvm'>
	I0819 13:02:20.461898 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <name>kubernetes-upgrade-227337</name>
	I0819 13:02:20.461906 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <memory unit='MiB'>2200</memory>
	I0819 13:02:20.461916 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <vcpu>2</vcpu>
	I0819 13:02:20.461923 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <features>
	I0819 13:02:20.461946 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <acpi/>
	I0819 13:02:20.461961 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <apic/>
	I0819 13:02:20.461972 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <pae/>
	I0819 13:02:20.461984 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     
	I0819 13:02:20.462002 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   </features>
	I0819 13:02:20.462009 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <cpu mode='host-passthrough'>
	I0819 13:02:20.462014 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   
	I0819 13:02:20.462020 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   </cpu>
	I0819 13:02:20.462025 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <os>
	I0819 13:02:20.462031 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <type>hvm</type>
	I0819 13:02:20.462036 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <boot dev='cdrom'/>
	I0819 13:02:20.462044 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <boot dev='hd'/>
	I0819 13:02:20.462053 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <bootmenu enable='no'/>
	I0819 13:02:20.462064 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   </os>
	I0819 13:02:20.462087 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   <devices>
	I0819 13:02:20.462105 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <disk type='file' device='cdrom'>
	I0819 13:02:20.462115 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/boot2docker.iso'/>
	I0819 13:02:20.462120 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <target dev='hdc' bus='scsi'/>
	I0819 13:02:20.462126 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <readonly/>
	I0819 13:02:20.462135 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </disk>
	I0819 13:02:20.462143 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <disk type='file' device='disk'>
	I0819 13:02:20.462156 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 13:02:20.462167 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/kubernetes-upgrade-227337.rawdisk'/>
	I0819 13:02:20.462182 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <target dev='hda' bus='virtio'/>
	I0819 13:02:20.462197 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </disk>
	I0819 13:02:20.462209 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <interface type='network'>
	I0819 13:02:20.462221 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <source network='mk-kubernetes-upgrade-227337'/>
	I0819 13:02:20.462231 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <model type='virtio'/>
	I0819 13:02:20.462244 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </interface>
	I0819 13:02:20.462262 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <interface type='network'>
	I0819 13:02:20.462272 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <source network='default'/>
	I0819 13:02:20.462285 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <model type='virtio'/>
	I0819 13:02:20.462295 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </interface>
	I0819 13:02:20.462305 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <serial type='pty'>
	I0819 13:02:20.462316 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <target port='0'/>
	I0819 13:02:20.462351 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </serial>
	I0819 13:02:20.462387 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <console type='pty'>
	I0819 13:02:20.462403 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <target type='serial' port='0'/>
	I0819 13:02:20.462418 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </console>
	I0819 13:02:20.462432 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     <rng model='virtio'>
	I0819 13:02:20.462444 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)       <backend model='random'>/dev/random</backend>
	I0819 13:02:20.462457 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     </rng>
	I0819 13:02:20.462472 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     
	I0819 13:02:20.462485 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)     
	I0819 13:02:20.462500 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337)   </devices>
	I0819 13:02:20.462512 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) </domain>
	I0819 13:02:20.462523 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) 
	I0819 13:02:20.557169 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:75:ce:01 in network default
	I0819 13:02:20.557908 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:20.557954 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Ensuring networks are active...
	I0819 13:02:20.558732 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Ensuring network default is active
	I0819 13:02:20.559152 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Ensuring network mk-kubernetes-upgrade-227337 is active
	I0819 13:02:20.565575 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Getting domain xml...
	I0819 13:02:20.566593 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Creating domain...
	I0819 13:02:22.336925 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Waiting to get IP...
	I0819 13:02:22.337894 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:22.338372 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:22.338431 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:22.338351 2132533 retry.go:31] will retry after 277.424165ms: waiting for machine to come up
	I0819 13:02:22.619459 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:22.620038 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:22.620067 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:22.619989 2132533 retry.go:31] will retry after 366.020927ms: waiting for machine to come up
	I0819 13:02:22.987448 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:22.988072 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:22.988105 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:22.988012 2132533 retry.go:31] will retry after 331.780398ms: waiting for machine to come up
	I0819 13:02:23.321890 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:23.346280 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:23.346322 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:23.346139 2132533 retry.go:31] will retry after 379.294696ms: waiting for machine to come up
	I0819 13:02:23.933714 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:23.940356 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:23.940385 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:23.940294 2132533 retry.go:31] will retry after 603.685273ms: waiting for machine to come up
	I0819 13:02:24.545684 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:24.546250 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:24.546281 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:24.546190 2132533 retry.go:31] will retry after 933.097125ms: waiting for machine to come up
	I0819 13:02:25.481037 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:25.481660 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:25.481698 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:25.481606 2132533 retry.go:31] will retry after 1.181234469s: waiting for machine to come up
	I0819 13:02:26.665019 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:26.665377 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:26.665402 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:26.665354 2132533 retry.go:31] will retry after 1.213780716s: waiting for machine to come up
	I0819 13:02:27.880705 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:27.881273 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:27.881334 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:27.881225 2132533 retry.go:31] will retry after 1.661392969s: waiting for machine to come up
	I0819 13:02:29.545187 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:29.545641 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:29.545669 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:29.545581 2132533 retry.go:31] will retry after 1.666721615s: waiting for machine to come up
	I0819 13:02:31.214512 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:31.215009 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:31.215087 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:31.214987 2132533 retry.go:31] will retry after 2.158637766s: waiting for machine to come up
	I0819 13:02:33.376275 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:33.376724 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:33.376749 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:33.376676 2132533 retry.go:31] will retry after 2.650230457s: waiting for machine to come up
	I0819 13:02:36.028824 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:36.029262 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:36.029353 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:36.029234 2132533 retry.go:31] will retry after 4.119722077s: waiting for machine to come up
	I0819 13:02:40.150935 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:40.151385 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find current IP address of domain kubernetes-upgrade-227337 in network mk-kubernetes-upgrade-227337
	I0819 13:02:40.151413 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | I0819 13:02:40.151311 2132533 retry.go:31] will retry after 5.1104224s: waiting for machine to come up
	I0819 13:02:45.263770 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.264263 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Found IP for machine: 192.168.50.233
	I0819 13:02:45.264288 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Reserving static IP address...
	I0819 13:02:45.264303 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has current primary IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.264662 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-227337", mac: "52:54:00:3f:7e:a0", ip: "192.168.50.233"} in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.344869 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Getting to WaitForSSH function...
	I0819 13:02:45.344901 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Reserved static IP address: 192.168.50.233
	I0819 13:02:45.344916 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Waiting for SSH to be available...
	I0819 13:02:45.347580 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.348085 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.348118 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.348323 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Using SSH client type: external
	I0819 13:02:45.348358 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa (-rw-------)
	I0819 13:02:45.348396 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:02:45.348425 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | About to run SSH command:
	I0819 13:02:45.348443 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | exit 0
	I0819 13:02:45.468183 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | SSH cmd err, output: <nil>: 
	I0819 13:02:45.468468 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) KVM machine creation complete!
	I0819 13:02:45.468783 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetConfigRaw
	I0819 13:02:45.469325 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:45.469498 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:45.469643 2132354 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 13:02:45.469655 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetState
	I0819 13:02:45.470971 2132354 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 13:02:45.470987 2132354 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 13:02:45.471006 2132354 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 13:02:45.471015 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:45.473434 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.473779 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.473811 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.473928 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:45.474111 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.474302 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.474479 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:45.474669 2132354 main.go:141] libmachine: Using SSH client type: native
	I0819 13:02:45.474873 2132354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 13:02:45.474885 2132354 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 13:02:45.571289 2132354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:02:45.571320 2132354 main.go:141] libmachine: Detecting the provisioner...
	I0819 13:02:45.571334 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:45.574732 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.575137 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.575167 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.575339 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:45.575521 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.575708 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.575885 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:45.576098 2132354 main.go:141] libmachine: Using SSH client type: native
	I0819 13:02:45.576302 2132354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 13:02:45.576317 2132354 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 13:02:45.680822 2132354 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 13:02:45.680888 2132354 main.go:141] libmachine: found compatible host: buildroot
	I0819 13:02:45.680895 2132354 main.go:141] libmachine: Provisioning with buildroot...
	I0819 13:02:45.680903 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetMachineName
	I0819 13:02:45.681208 2132354 buildroot.go:166] provisioning hostname "kubernetes-upgrade-227337"
	I0819 13:02:45.681247 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetMachineName
	I0819 13:02:45.681462 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:45.684370 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.684673 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.684709 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.684970 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:45.685171 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.685356 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.685488 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:45.685641 2132354 main.go:141] libmachine: Using SSH client type: native
	I0819 13:02:45.685817 2132354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 13:02:45.685830 2132354 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-227337 && echo "kubernetes-upgrade-227337" | sudo tee /etc/hostname
	I0819 13:02:45.798567 2132354 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-227337
	
	I0819 13:02:45.798598 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:45.801662 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.802070 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.802098 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.802300 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:45.802498 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.802690 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:45.802867 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:45.803059 2132354 main.go:141] libmachine: Using SSH client type: native
	I0819 13:02:45.803287 2132354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 13:02:45.803308 2132354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-227337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-227337/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-227337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:02:45.909673 2132354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:02:45.909722 2132354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:02:45.909753 2132354 buildroot.go:174] setting up certificates
	I0819 13:02:45.909765 2132354 provision.go:84] configureAuth start
	I0819 13:02:45.909776 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetMachineName
	I0819 13:02:45.910084 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetIP
	I0819 13:02:45.912859 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.913201 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.913236 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.913420 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:45.915590 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.915904 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:45.915935 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:45.916106 2132354 provision.go:143] copyHostCerts
	I0819 13:02:45.916185 2132354 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:02:45.916210 2132354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:02:45.916268 2132354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:02:45.916361 2132354 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:02:45.916369 2132354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:02:45.916388 2132354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:02:45.916437 2132354 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:02:45.916444 2132354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:02:45.916461 2132354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:02:45.916507 2132354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-227337 san=[127.0.0.1 192.168.50.233 kubernetes-upgrade-227337 localhost minikube]
	I0819 13:02:46.244909 2132354 provision.go:177] copyRemoteCerts
	I0819 13:02:46.244980 2132354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:02:46.245006 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:46.247734 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.248063 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.248096 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.248292 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:46.248521 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.248697 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:46.248839 2132354 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa Username:docker}
	I0819 13:02:46.326325 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:02:46.350963 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 13:02:46.375077 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 13:02:46.398860 2132354 provision.go:87] duration metric: took 489.076911ms to configureAuth
	I0819 13:02:46.398898 2132354 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:02:46.399118 2132354 config.go:182] Loaded profile config "kubernetes-upgrade-227337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:02:46.399264 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:46.402220 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.402570 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.402611 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.402758 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:46.402965 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.403106 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.403279 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:46.403450 2132354 main.go:141] libmachine: Using SSH client type: native
	I0819 13:02:46.403611 2132354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 13:02:46.403626 2132354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:02:46.657538 2132354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:02:46.657580 2132354 main.go:141] libmachine: Checking connection to Docker...
	I0819 13:02:46.657591 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetURL
	I0819 13:02:46.659065 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Using libvirt version 6000000
	I0819 13:02:46.661154 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.661473 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.661506 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.661704 2132354 main.go:141] libmachine: Docker is up and running!
	I0819 13:02:46.661718 2132354 main.go:141] libmachine: Reticulating splines...
	I0819 13:02:46.661725 2132354 client.go:171] duration metric: took 26.924861077s to LocalClient.Create
	I0819 13:02:46.661751 2132354 start.go:167] duration metric: took 26.924942905s to libmachine.API.Create "kubernetes-upgrade-227337"
	I0819 13:02:46.661763 2132354 start.go:293] postStartSetup for "kubernetes-upgrade-227337" (driver="kvm2")
	I0819 13:02:46.661775 2132354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:02:46.661798 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:46.662060 2132354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:02:46.662087 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:46.664387 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.664757 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.664801 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.664939 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:46.665153 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.665323 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:46.665455 2132354 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa Username:docker}
	I0819 13:02:46.742574 2132354 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:02:46.746724 2132354 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:02:46.746745 2132354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:02:46.746818 2132354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:02:46.746919 2132354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:02:46.747033 2132354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:02:46.755937 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:02:46.779503 2132354 start.go:296] duration metric: took 117.72706ms for postStartSetup
	I0819 13:02:46.779572 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetConfigRaw
	I0819 13:02:46.780150 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetIP
	I0819 13:02:46.782998 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.783393 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.783423 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.783663 2132354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/config.json ...
	I0819 13:02:46.783854 2132354 start.go:128] duration metric: took 27.102288275s to createHost
	I0819 13:02:46.783878 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:46.786301 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.786645 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.786685 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.786834 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:46.787064 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.787264 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.787449 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:46.787637 2132354 main.go:141] libmachine: Using SSH client type: native
	I0819 13:02:46.787838 2132354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 13:02:46.787849 2132354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:02:46.884827 2132354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724072566.837486441
	
	I0819 13:02:46.884851 2132354 fix.go:216] guest clock: 1724072566.837486441
	I0819 13:02:46.884859 2132354 fix.go:229] Guest: 2024-08-19 13:02:46.837486441 +0000 UTC Remote: 2024-08-19 13:02:46.783866168 +0000 UTC m=+42.202800115 (delta=53.620273ms)
	I0819 13:02:46.884878 2132354 fix.go:200] guest clock delta is within tolerance: 53.620273ms
	I0819 13:02:46.884883 2132354 start.go:83] releasing machines lock for "kubernetes-upgrade-227337", held for 27.203526158s
	I0819 13:02:46.884915 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:46.885211 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetIP
	I0819 13:02:46.887943 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.888427 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.888456 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.888628 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:46.889256 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:46.889469 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:02:46.889564 2132354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:02:46.889621 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:46.889697 2132354 ssh_runner.go:195] Run: cat /version.json
	I0819 13:02:46.889715 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:02:46.892557 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.892834 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.892859 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.892879 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.893184 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:46.893361 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.893364 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:46.893397 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:46.893544 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:02:46.893585 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:46.893754 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:02:46.893759 2132354 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa Username:docker}
	I0819 13:02:46.893918 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:02:46.894078 2132354 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa Username:docker}
	I0819 13:02:46.972983 2132354 ssh_runner.go:195] Run: systemctl --version
	I0819 13:02:46.993892 2132354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:02:47.151862 2132354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:02:47.158558 2132354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:02:47.158638 2132354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:02:47.175739 2132354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:02:47.175770 2132354 start.go:495] detecting cgroup driver to use...
	I0819 13:02:47.175836 2132354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:02:47.197268 2132354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:02:47.213959 2132354 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:02:47.214025 2132354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:02:47.227766 2132354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:02:47.241570 2132354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:02:47.360274 2132354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:02:47.500786 2132354 docker.go:233] disabling docker service ...
	I0819 13:02:47.500861 2132354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:02:47.517498 2132354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:02:47.532099 2132354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:02:47.689918 2132354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:02:47.820429 2132354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:02:47.834309 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:02:47.852741 2132354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:02:47.852816 2132354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:02:47.864002 2132354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:02:47.864081 2132354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:02:47.877958 2132354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:02:47.891808 2132354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
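
Note: the three sed edits above (pause_image, cgroup_manager, conmon_cgroup) all target CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf. A sketch of the keys that file ends up with; the surrounding TOML table names are an assumption, only the three key/value pairs come from the log:

  # /etc/crio/crio.conf.d/02-crio.conf (assumed layout after the sed edits above)
  [crio.image]
  pause_image = "registry.k8s.io/pause:3.2"

  [crio.runtime]
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
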
	I0819 13:02:47.902102 2132354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:02:47.912676 2132354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:02:47.922089 2132354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:02:47.922180 2132354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:02:47.935761 2132354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
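
Note: `sudo modprobe br_netfilter` and the `echo 1 > /proc/sys/net/ipv4/ip_forward` run above only apply until the next reboot. A hedged sketch of the equivalent persistent setup on a systemd host (file names are conventional, not taken from the log):

  # keep br_netfilter loaded and the forwarding sysctls set across reboots
  echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
  printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/k8s.conf
  sudo sysctl --system   # reload all sysctl.d fragments
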
	I0819 13:02:47.945110 2132354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:02:48.075789 2132354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:02:48.208256 2132354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:02:48.208355 2132354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:02:48.213546 2132354 start.go:563] Will wait 60s for crictl version
	I0819 13:02:48.213608 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:48.217482 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:02:48.260391 2132354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:02:48.260494 2132354 ssh_runner.go:195] Run: crio --version
	I0819 13:02:48.294466 2132354 ssh_runner.go:195] Run: crio --version
	I0819 13:02:48.328718 2132354 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:02:48.329824 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetIP
	I0819 13:02:48.332591 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:48.332949 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:02:35 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:02:48.332979 2132354 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:02:48.333244 2132354 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:02:48.337654 2132354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:02:48.350352 2132354 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:02:48.350489 2132354 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:02:48.350579 2132354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:02:48.388835 2132354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:02:48.388920 2132354 ssh_runner.go:195] Run: which lz4
	I0819 13:02:48.392898 2132354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:02:48.396960 2132354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:02:48.396987 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:02:50.098902 2132354 crio.go:462] duration metric: took 1.706031208s to copy over tarball
	I0819 13:02:50.099045 2132354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:02:52.823990 2132354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.724900241s)
	I0819 13:02:52.824026 2132354 crio.go:469] duration metric: took 2.725088542s to extract the tarball
	I0819 13:02:52.824036 2132354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:02:52.884655 2132354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:02:52.937968 2132354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:02:52.938011 2132354 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:02:52.938101 2132354 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:02:52.938411 2132354 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:52.938438 2132354 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:52.938583 2132354 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:02:52.938641 2132354 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:52.938802 2132354 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:52.938806 2132354 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:52.938418 2132354 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:02:52.940043 2132354 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:52.940068 2132354 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:52.940091 2132354 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:02:52.940182 2132354 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:52.940213 2132354 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:52.940340 2132354 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:02:52.940402 2132354 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:52.940458 2132354 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:02:53.102334 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:53.107497 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:53.113517 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:53.117191 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:02:53.120060 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:53.126260 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:02:53.135925 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:53.202670 2132354 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:02:53.202732 2132354 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:53.202786 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.275515 2132354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:02:53.275580 2132354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:53.275636 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.292944 2132354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:02:53.292994 2132354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:53.293049 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.318125 2132354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:02:53.318185 2132354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:53.318230 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.318136 2132354 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:02:53.318327 2132354 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:02:53.318397 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.318439 2132354 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:02:53.318470 2132354 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:02:53.318505 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.320515 2132354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:02:53.333470 2132354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:02:53.333520 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:53.333538 2132354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:53.333594 2132354 ssh_runner.go:195] Run: which crictl
	I0819 13:02:53.333603 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:53.333638 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:53.334255 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:53.334311 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:02:53.334347 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:02:53.600210 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:53.600374 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:53.600482 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:53.600578 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:53.600672 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:02:53.600762 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:53.600849 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:02:53.787940 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:02:53.792269 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:53.792304 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:02:53.792460 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:02:53.792523 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:02:53.792570 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:02:53.792625 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:02:53.934712 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:02:54.004777 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:02:54.004919 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:02:54.004987 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:02:54.005037 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:02:54.005079 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:02:54.005365 2132354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:02:54.060811 2132354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:02:54.060883 2132354 cache_images.go:92] duration metric: took 1.122855s to LoadCachedImages
	W0819 13:02:54.060967 2132354 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0819 13:02:54.060979 2132354 kubeadm.go:934] updating node { 192.168.50.233 8443 v1.20.0 crio true true} ...
	I0819 13:02:54.061106 2132354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-227337 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:02:54.061175 2132354 ssh_runner.go:195] Run: crio config
	I0819 13:02:54.135530 2132354 cni.go:84] Creating CNI manager for ""
	I0819 13:02:54.135569 2132354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:02:54.135588 2132354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:02:54.135618 2132354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.233 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-227337 NodeName:kubernetes-upgrade-227337 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:02:54.135816 2132354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-227337"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
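
Note: before a generated config like the one above is handed to kubeadm (which happens further down, at 13:02:55, against /var/tmp/minikube/kubeadm.yaml), it can be exercised without touching the node. An illustrative sketch using standard kubeadm flags, not something minikube itself runs here:

  # report what kubeadm would create from the generated config, without creating it
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
  # print the defaulted InitConfiguration/ClusterConfiguration for comparison
  kubeadm config print init-defaults
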
	I0819 13:02:54.135902 2132354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:02:54.154410 2132354 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:02:54.154504 2132354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:02:54.166371 2132354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0819 13:02:54.190921 2132354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:02:54.219940 2132354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0819 13:02:54.244339 2132354 ssh_runner.go:195] Run: grep 192.168.50.233	control-plane.minikube.internal$ /etc/hosts
	I0819 13:02:54.250472 2132354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
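
Note: the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` one-liner above is compact but easy to misread; an equivalent spelled-out form (same effect, shown only for clarity):

  # drop any stale control-plane.minikube.internal line, then append the current mapping
  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
  printf '192.168.50.233\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts   # cp over the existing file keeps its owner and permissions
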
	I0819 13:02:54.268004 2132354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:02:54.431130 2132354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:02:54.460586 2132354 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337 for IP: 192.168.50.233
	I0819 13:02:54.460623 2132354 certs.go:194] generating shared ca certs ...
	I0819 13:02:54.460669 2132354 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:54.460868 2132354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:02:54.460924 2132354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:02:54.460936 2132354 certs.go:256] generating profile certs ...
	I0819 13:02:54.461037 2132354 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.key
	I0819 13:02:54.461079 2132354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.crt with IP's: []
	I0819 13:02:54.681509 2132354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.crt ...
	I0819 13:02:54.681547 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.crt: {Name:mk8fb1840cff4c49ae4c877055e0e023ad2139e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:54.711982 2132354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.key ...
	I0819 13:02:54.712060 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.key: {Name:mka23993eb0a260d2a948ab1a818888a6934d05c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:54.712269 2132354 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key.15d11da9
	I0819 13:02:54.712306 2132354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt.15d11da9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.233]
	I0819 13:02:54.975587 2132354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt.15d11da9 ...
	I0819 13:02:54.975629 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt.15d11da9: {Name:mk97ac878f2fe0fabacb235e8df10b70c96650cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:54.975848 2132354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key.15d11da9 ...
	I0819 13:02:54.975870 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key.15d11da9: {Name:mkb59c75645df1167c7ed14fd5fdfc8c9bac4b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:54.976030 2132354 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt.15d11da9 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt
	I0819 13:02:54.976154 2132354 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key.15d11da9 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key
	I0819 13:02:54.976271 2132354 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.key
	I0819 13:02:54.976299 2132354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.crt with IP's: []
	I0819 13:02:55.119252 2132354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.crt ...
	I0819 13:02:55.119291 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.crt: {Name:mk49c0581f95bec91c6d50af3c16c8b5bead47d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:55.119497 2132354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.key ...
	I0819 13:02:55.119512 2132354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.key: {Name:mk4214a2126dd93e4822387df803d44c813cbaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:02:55.119726 2132354 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:02:55.119771 2132354 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:02:55.119788 2132354 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:02:55.119821 2132354 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:02:55.119847 2132354 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:02:55.119876 2132354 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:02:55.119923 2132354 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:02:55.120628 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:02:55.178104 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:02:55.220968 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:02:55.254407 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:02:55.287006 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 13:02:55.320041 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:02:55.358356 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:02:55.398345 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:02:55.432762 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:02:55.469247 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:02:55.504449 2132354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:02:55.554108 2132354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:02:55.584749 2132354 ssh_runner.go:195] Run: openssl version
	I0819 13:02:55.596607 2132354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:02:55.619370 2132354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:02:55.634017 2132354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:02:55.634089 2132354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:02:55.643899 2132354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:02:55.659700 2132354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:02:55.674614 2132354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:02:55.681457 2132354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:02:55.681549 2132354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:02:55.688629 2132354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:02:55.703652 2132354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:02:55.718686 2132354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:02:55.725436 2132354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:02:55.725505 2132354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:02:55.736384 2132354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
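
Note: the hex names being linked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is exactly what the preceding `openssl x509 -hash -noout -in ...` runs compute; OpenSSL resolves CAs in /etc/ssl/certs by that hash. Checking one by hand:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem
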
	I0819 13:02:55.752804 2132354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:02:55.760066 2132354 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 13:02:55.760138 2132354 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:02:55.760279 2132354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:02:55.760336 2132354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:02:55.818677 2132354 cri.go:89] found id: ""
	I0819 13:02:55.818763 2132354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:02:55.832710 2132354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:02:55.845126 2132354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:02:55.856389 2132354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:02:55.856413 2132354 kubeadm.go:157] found existing configuration files:
	
	I0819 13:02:55.856470 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:02:55.870512 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:02:55.870606 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:02:55.884830 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:02:55.899015 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:02:55.899095 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:02:55.913817 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:02:55.928100 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:02:55.928212 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:02:55.939341 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:02:55.950014 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:02:55.950096 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:02:55.963658 2132354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:02:56.112401 2132354 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:02:56.112460 2132354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:02:56.296588 2132354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:02:56.296741 2132354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:02:56.296874 2132354 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:02:56.566429 2132354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:02:56.677706 2132354 out.go:235]   - Generating certificates and keys ...
	I0819 13:02:56.677870 2132354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:02:56.677997 2132354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:02:56.721911 2132354 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 13:02:57.142096 2132354 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 13:02:57.358964 2132354 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 13:02:57.810024 2132354 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 13:02:57.915033 2132354 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 13:02:57.915263 2132354 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-227337 localhost] and IPs [192.168.50.233 127.0.0.1 ::1]
	I0819 13:02:57.981649 2132354 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 13:02:57.981886 2132354 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-227337 localhost] and IPs [192.168.50.233 127.0.0.1 ::1]
	I0819 13:02:58.117290 2132354 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 13:02:58.182457 2132354 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 13:02:58.436486 2132354 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 13:02:58.439072 2132354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:02:58.593538 2132354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:02:58.740087 2132354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:02:59.015002 2132354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:02:59.363009 2132354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:02:59.386129 2132354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:02:59.387307 2132354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:02:59.387500 2132354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:02:59.588402 2132354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:02:59.590390 2132354 out.go:235]   - Booting up control plane ...
	I0819 13:02:59.590622 2132354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:02:59.597758 2132354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:02:59.599885 2132354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:02:59.601122 2132354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:02:59.612643 2132354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:03:39.573117 2132354 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:03:39.573668 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:03:39.573988 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:03:44.573421 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:03:44.573603 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:03:54.572571 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:03:54.572831 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:04:14.572418 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:04:14.572682 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:04:54.571373 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:04:54.571645 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:04:54.571679 2132354 kubeadm.go:310] 
	I0819 13:04:54.571748 2132354 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:04:54.571819 2132354 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:04:54.571837 2132354 kubeadm.go:310] 
	I0819 13:04:54.571880 2132354 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:04:54.571928 2132354 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:04:54.572044 2132354 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:04:54.572055 2132354 kubeadm.go:310] 
	I0819 13:04:54.572202 2132354 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:04:54.572251 2132354 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:04:54.572299 2132354 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:04:54.572308 2132354 kubeadm.go:310] 
	I0819 13:04:54.572438 2132354 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:04:54.572543 2132354 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:04:54.572559 2132354 kubeadm.go:310] 
	I0819 13:04:54.572708 2132354 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:04:54.572824 2132354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:04:54.572931 2132354 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:04:54.573015 2132354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:04:54.573027 2132354 kubeadm.go:310] 
	I0819 13:04:54.573860 2132354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:04:54.573964 2132354 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:04:54.574055 2132354 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:04:54.574216 2132354 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-227337 localhost] and IPs [192.168.50.233 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-227337 localhost] and IPs [192.168.50.233 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:04:54.574263 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:04:55.178652 2132354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:04:55.192113 2132354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:04:55.202784 2132354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:04:55.202811 2132354 kubeadm.go:157] found existing configuration files:
	
	I0819 13:04:55.202869 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:04:55.212261 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:04:55.212327 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:04:55.221797 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:04:55.231229 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:04:55.231271 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:04:55.240650 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:04:55.249780 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:04:55.249833 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:04:55.258611 2132354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:04:55.266983 2132354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:04:55.267031 2132354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:04:55.275592 2132354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:04:55.338610 2132354 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:04:55.338726 2132354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:04:55.473959 2132354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:04:55.474064 2132354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:04:55.474152 2132354 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:04:55.659721 2132354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:04:55.661609 2132354 out.go:235]   - Generating certificates and keys ...
	I0819 13:04:55.661687 2132354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:04:55.661747 2132354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:04:55.661861 2132354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:04:55.661961 2132354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:04:55.662068 2132354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:04:55.662140 2132354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:04:55.662232 2132354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:04:55.662319 2132354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:04:55.662417 2132354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:04:55.662529 2132354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:04:55.662588 2132354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:04:55.662689 2132354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:04:55.734722 2132354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:04:55.931375 2132354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:04:56.082205 2132354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:04:56.207794 2132354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:04:56.221905 2132354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:04:56.223052 2132354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:04:56.223125 2132354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:04:56.353685 2132354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:04:56.355446 2132354 out.go:235]   - Booting up control plane ...
	I0819 13:04:56.355555 2132354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:04:56.357350 2132354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:04:56.359274 2132354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:04:56.360113 2132354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:04:56.362063 2132354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:05:36.364148 2132354 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:05:36.364292 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:05:36.364491 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:05:41.365316 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:05:41.365575 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:05:51.366748 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:05:51.367038 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:06:11.368319 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:06:11.368586 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:06:51.368149 2132354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:06:51.368426 2132354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:06:51.368438 2132354 kubeadm.go:310] 
	I0819 13:06:51.368487 2132354 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:06:51.368541 2132354 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:06:51.368548 2132354 kubeadm.go:310] 
	I0819 13:06:51.368596 2132354 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:06:51.368633 2132354 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:06:51.368766 2132354 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:06:51.368771 2132354 kubeadm.go:310] 
	I0819 13:06:51.368854 2132354 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:06:51.368882 2132354 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:06:51.368918 2132354 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:06:51.368925 2132354 kubeadm.go:310] 
	I0819 13:06:51.369036 2132354 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:06:51.369103 2132354 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:06:51.369107 2132354 kubeadm.go:310] 
	I0819 13:06:51.369197 2132354 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:06:51.369270 2132354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:06:51.369332 2132354 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:06:51.369389 2132354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:06:51.369393 2132354 kubeadm.go:310] 
	I0819 13:06:51.371208 2132354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:06:51.371347 2132354 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:06:51.371526 2132354 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:06:51.371528 2132354 kubeadm.go:394] duration metric: took 3m55.611393981s to StartCluster
	I0819 13:06:51.371624 2132354 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:06:51.371693 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:06:51.418065 2132354 cri.go:89] found id: ""
	I0819 13:06:51.418106 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.418118 2132354 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:06:51.418127 2132354 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:06:51.418190 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:06:51.463083 2132354 cri.go:89] found id: ""
	I0819 13:06:51.463121 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.463133 2132354 logs.go:278] No container was found matching "etcd"
	I0819 13:06:51.463142 2132354 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:06:51.463216 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:06:51.502030 2132354 cri.go:89] found id: ""
	I0819 13:06:51.502066 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.502077 2132354 logs.go:278] No container was found matching "coredns"
	I0819 13:06:51.502085 2132354 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:06:51.502165 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:06:51.542558 2132354 cri.go:89] found id: ""
	I0819 13:06:51.542597 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.542610 2132354 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:06:51.542618 2132354 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:06:51.542687 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:06:51.581012 2132354 cri.go:89] found id: ""
	I0819 13:06:51.581051 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.581063 2132354 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:06:51.581071 2132354 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:06:51.581148 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:06:51.617807 2132354 cri.go:89] found id: ""
	I0819 13:06:51.617846 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.617856 2132354 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:06:51.617863 2132354 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:06:51.617933 2132354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:06:51.653399 2132354 cri.go:89] found id: ""
	I0819 13:06:51.653444 2132354 logs.go:276] 0 containers: []
	W0819 13:06:51.653456 2132354 logs.go:278] No container was found matching "kindnet"
	I0819 13:06:51.653473 2132354 logs.go:123] Gathering logs for dmesg ...
	I0819 13:06:51.653497 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:06:51.671975 2132354 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:06:51.672018 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:06:51.850082 2132354 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:06:51.850118 2132354 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:06:51.850135 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:06:51.976124 2132354 logs.go:123] Gathering logs for container status ...
	I0819 13:06:51.976182 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:06:52.023420 2132354 logs.go:123] Gathering logs for kubelet ...
	I0819 13:06:52.023463 2132354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 13:06:52.105329 2132354 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:06:52.105442 2132354 out.go:270] * 
	W0819 13:06:52.105518 2132354 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:06:52.105535 2132354 out.go:270] * 
	W0819 13:06:52.106836 2132354 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:06:52.110242 2132354 out.go:201] 
	W0819 13:06:52.111403 2132354 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:06:52.111461 2132354 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:06:52.111487 2132354 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:06:52.112786 2132354 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
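The stderr above carries minikube's own suggestion for this failure (a possible kubelet cgroup-driver mismatch on the v1.20.0 bootstrap). As a sketch only, not part of the recorded run and not verified to resolve K8S_KUBELET_NOT_RUNNING on this runner, the same start command with that suggested flag would be:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 \
	  --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 \
	  --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd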
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-227337
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-227337: (1.471830968s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-227337 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-227337 status --format={{.Host}}: exit status 7 (68.482092ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
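The stdout above reports the host as Stopped, so the non-zero status is expected here and the harness accepts it. A quick way to reproduce the same check by hand, outside the recorded run, is to rerun the status command and inspect the shell's exit code:

	out/minikube-linux-amd64 -p kubernetes-upgrade-227337 status --format={{.Host}}; echo "status exit code: $?"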
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.942364426s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-227337 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (96.25203ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-227337] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-227337
	    minikube start -p kubernetes-upgrade-227337 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2273372 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-227337 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
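With the downgrade refused (exit status 106), the profile is expected to still be running the newer control plane. A minimal confirmation before the follow-up restart, not part of the recorded run but reusing the same context the test queried earlier, would be:

	kubectl --context kubernetes-upgrade-227337 version --output=json

where serverVersion.gitVersion should still report v1.31.0.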
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-227337 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.746734154s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-19 13:08:31.57996579 +0000 UTC m=+4312.214756065
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-227337 -n kubernetes-upgrade-227337
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-227337 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-227337 logs -n 25: (1.636112955s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-698949                                      | pause-698949              | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| start   | -p custom-flannel-649777                             | custom-flannel-649777     | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo docker                        | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo                               | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo find                          | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo crio                          | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-649777                                    | kindnet-649777            | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	| start   | -p enable-default-cni-649777                         | enable-default-cni-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:08:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:08:26.408347 2140327 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:08:26.408482 2140327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:08:26.408494 2140327 out.go:358] Setting ErrFile to fd 2...
	I0819 13:08:26.408500 2140327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:08:26.408667 2140327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:08:26.409230 2140327 out.go:352] Setting JSON to false
	I0819 13:08:26.410342 2140327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165056,"bootTime":1723907850,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:08:26.410397 2140327 start.go:139] virtualization: kvm guest
	I0819 13:08:26.412416 2140327 out.go:177] * [enable-default-cni-649777] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:08:26.413493 2140327 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:08:26.413544 2140327 notify.go:220] Checking for updates...
	I0819 13:08:26.415415 2140327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:08:26.416442 2140327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:08:26.417461 2140327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:26.418526 2140327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:08:26.419528 2140327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:08:26.421210 2140327 config.go:182] Loaded profile config "calico-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:26.421356 2140327 config.go:182] Loaded profile config "custom-flannel-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:26.421482 2140327 config.go:182] Loaded profile config "kubernetes-upgrade-227337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:26.421585 2140327 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:08:26.458617 2140327 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 13:08:26.459816 2140327 start.go:297] selected driver: kvm2
	I0819 13:08:26.459835 2140327 start.go:901] validating driver "kvm2" against <nil>
	I0819 13:08:26.459851 2140327 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:08:26.460626 2140327 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:08:26.460740 2140327 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:08:26.477073 2140327 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:08:26.477119 2140327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0819 13:08:26.477342 2140327 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0819 13:08:26.477367 2140327 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:08:26.477411 2140327 cni.go:84] Creating CNI manager for "bridge"
	I0819 13:08:26.477424 2140327 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 13:08:26.477489 2140327 start.go:340] cluster config:
	{Name:enable-default-cni-649777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-649777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:08:26.477611 2140327 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:08:26.479496 2140327 out.go:177] * Starting "enable-default-cni-649777" primary control-plane node in "enable-default-cni-649777" cluster
	I0819 13:08:26.089541 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:26.090178 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:26.090217 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:26.090144 2138633 retry.go:31] will retry after 4.474058838s: waiting for machine to come up
	I0819 13:08:26.110244 2137182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:08:26.130317 2137182 api_server.go:72] duration metric: took 1.021083869s to wait for apiserver process to appear ...
	I0819 13:08:26.130346 2137182 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:08:26.130394 2137182 api_server.go:253] Checking apiserver healthz at https://192.168.50.233:8443/healthz ...
	I0819 13:08:28.708539 2137182 api_server.go:279] https://192.168.50.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:08:28.708589 2137182 api_server.go:103] status: https://192.168.50.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:08:28.708604 2137182 api_server.go:253] Checking apiserver healthz at https://192.168.50.233:8443/healthz ...
	I0819 13:08:28.780318 2137182 api_server.go:279] https://192.168.50.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:08:28.780353 2137182 api_server.go:103] status: https://192.168.50.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:08:29.130799 2137182 api_server.go:253] Checking apiserver healthz at https://192.168.50.233:8443/healthz ...
	I0819 13:08:29.137644 2137182 api_server.go:279] https://192.168.50.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:08:29.137672 2137182 api_server.go:103] status: https://192.168.50.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:08:29.631269 2137182 api_server.go:253] Checking apiserver healthz at https://192.168.50.233:8443/healthz ...
	I0819 13:08:29.636831 2137182 api_server.go:279] https://192.168.50.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:08:29.636879 2137182 api_server.go:103] status: https://192.168.50.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:08:30.131426 2137182 api_server.go:253] Checking apiserver healthz at https://192.168.50.233:8443/healthz ...
	I0819 13:08:30.135510 2137182 api_server.go:279] https://192.168.50.233:8443/healthz returned 200:
	ok
	I0819 13:08:30.141887 2137182 api_server.go:141] control plane version: v1.31.0
	I0819 13:08:30.141912 2137182 api_server.go:131] duration metric: took 4.01155906s to wait for apiserver health ...
	I0819 13:08:30.141923 2137182 cni.go:84] Creating CNI manager for ""
	I0819 13:08:30.141932 2137182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:08:30.143537 2137182 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:08:30.144716 2137182 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:08:30.155863 2137182 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:08:30.173534 2137182 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:08:30.173606 2137182 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 13:08:30.173627 2137182 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 13:08:30.182762 2137182 system_pods.go:59] 8 kube-system pods found
	I0819 13:08:30.182783 2137182 system_pods.go:61] "coredns-6f6b679f8f-2mlp2" [41720865-5a4e-41cb-9a04-1b99e0e3c20e] Running
	I0819 13:08:30.182788 2137182 system_pods.go:61] "coredns-6f6b679f8f-wgkgj" [9f84103a-ef6d-4427-b63d-266586be5095] Running
	I0819 13:08:30.182796 2137182 system_pods.go:61] "etcd-kubernetes-upgrade-227337" [481c391c-43ad-4c06-85fe-2bca48632fe3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:08:30.182805 2137182 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-227337" [2d4c3d1e-51b4-4b11-9bd2-0018977a3fb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:08:30.182815 2137182 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-227337" [9977c7d2-53b1-4c40-a190-de82678c97db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:08:30.182820 2137182 system_pods.go:61] "kube-proxy-mfqrj" [36f996b2-55d5-487f-bd63-eb4aab5db2ac] Running
	I0819 13:08:30.182826 2137182 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-227337" [11f32a9f-3b17-4b53-9645-5f7b217f349a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:08:30.182833 2137182 system_pods.go:61] "storage-provisioner" [20d68ae1-b428-4386-9314-7c370460a5c1] Running
	I0819 13:08:30.182840 2137182 system_pods.go:74] duration metric: took 9.290864ms to wait for pod list to return data ...
	I0819 13:08:30.182848 2137182 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:08:30.185810 2137182 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:08:30.185832 2137182 node_conditions.go:123] node cpu capacity is 2
	I0819 13:08:30.185843 2137182 node_conditions.go:105] duration metric: took 2.989426ms to run NodePressure ...
	I0819 13:08:30.185858 2137182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:08:30.484493 2137182 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:08:30.496128 2137182 ops.go:34] apiserver oom_adj: -16
	I0819 13:08:30.496155 2137182 kubeadm.go:597] duration metric: took 17.837148881s to restartPrimaryControlPlane
	I0819 13:08:30.496179 2137182 kubeadm.go:394] duration metric: took 18.454741599s to StartCluster
	I0819 13:08:30.496203 2137182 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:30.496308 2137182 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:08:30.496960 2137182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:30.497216 2137182 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:08:30.497280 2137182 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:08:30.497368 2137182 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-227337"
	I0819 13:08:30.497398 2137182 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-227337"
	W0819 13:08:30.497405 2137182 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:08:30.497405 2137182 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-227337"
	I0819 13:08:30.497438 2137182 host.go:66] Checking if "kubernetes-upgrade-227337" exists ...
	I0819 13:08:30.497459 2137182 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-227337"
	I0819 13:08:30.497459 2137182 config.go:182] Loaded profile config "kubernetes-upgrade-227337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:30.497786 2137182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:30.497821 2137182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:30.497873 2137182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:30.497906 2137182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:30.498761 2137182 out.go:177] * Verifying Kubernetes components...
	I0819 13:08:30.500029 2137182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:08:30.513603 2137182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0819 13:08:30.513639 2137182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39467
	I0819 13:08:30.514035 2137182 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:30.514061 2137182 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:30.514476 2137182 main.go:141] libmachine: Using API Version  1
	I0819 13:08:30.514490 2137182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:30.514622 2137182 main.go:141] libmachine: Using API Version  1
	I0819 13:08:30.514646 2137182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:30.514859 2137182 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:30.514909 2137182 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:30.515063 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetState
	I0819 13:08:30.515469 2137182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:30.515501 2137182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:30.517626 2137182 kapi.go:59] client config for kubernetes-upgrade-227337: &rest.Config{Host:"https://192.168.50.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.crt", KeyFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.key", CAFile:"/home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uin
t8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 13:08:30.517985 2137182 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-227337"
	W0819 13:08:30.518006 2137182 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:08:30.518039 2137182 host.go:66] Checking if "kubernetes-upgrade-227337" exists ...
	I0819 13:08:30.518438 2137182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:30.518475 2137182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:30.532929 2137182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0819 13:08:30.533340 2137182 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:30.533892 2137182 main.go:141] libmachine: Using API Version  1
	I0819 13:08:30.533915 2137182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:30.534293 2137182 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:30.534370 2137182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I0819 13:08:30.534742 2137182 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:30.534898 2137182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:30.534946 2137182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:30.535220 2137182 main.go:141] libmachine: Using API Version  1
	I0819 13:08:30.535243 2137182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:30.535602 2137182 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:30.535821 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetState
	I0819 13:08:30.537494 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:08:30.539581 2137182 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:08:30.540815 2137182 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:08:30.540841 2137182 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:08:30.540865 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:08:30.544290 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:30.544828 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:07:24 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:08:30.544860 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:30.545025 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:08:30.545202 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:08:30.545372 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:08:30.545522 2137182 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa Username:docker}
	I0819 13:08:30.551487 2137182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0819 13:08:30.551926 2137182 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:30.552405 2137182 main.go:141] libmachine: Using API Version  1
	I0819 13:08:30.552431 2137182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:30.552753 2137182 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:30.552949 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetState
	I0819 13:08:30.554457 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .DriverName
	I0819 13:08:30.554684 2137182 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:08:30.554702 2137182 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:08:30.554720 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHHostname
	I0819 13:08:30.557564 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:30.558051 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:07:24 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:08:30.558078 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:30.558258 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHPort
	I0819 13:08:30.558440 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHKeyPath
	I0819 13:08:30.558602 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetSSHUsername
	I0819 13:08:30.558713 2137182 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/kubernetes-upgrade-227337/id_rsa Username:docker}
	I0819 13:08:30.662998 2137182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:08:30.681387 2137182 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:08:30.681489 2137182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:08:30.695398 2137182 api_server.go:72] duration metric: took 198.145176ms to wait for apiserver process to appear ...
	I0819 13:08:30.695430 2137182 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:08:30.695454 2137182 api_server.go:253] Checking apiserver healthz at https://192.168.50.233:8443/healthz ...
	I0819 13:08:30.699761 2137182 api_server.go:279] https://192.168.50.233:8443/healthz returned 200:
	ok
	I0819 13:08:30.700790 2137182 api_server.go:141] control plane version: v1.31.0
	I0819 13:08:30.700813 2137182 api_server.go:131] duration metric: took 5.37508ms to wait for apiserver health ...
	I0819 13:08:30.700823 2137182 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:08:30.707160 2137182 system_pods.go:59] 8 kube-system pods found
	I0819 13:08:30.707190 2137182 system_pods.go:61] "coredns-6f6b679f8f-2mlp2" [41720865-5a4e-41cb-9a04-1b99e0e3c20e] Running
	I0819 13:08:30.707197 2137182 system_pods.go:61] "coredns-6f6b679f8f-wgkgj" [9f84103a-ef6d-4427-b63d-266586be5095] Running
	I0819 13:08:30.707206 2137182 system_pods.go:61] "etcd-kubernetes-upgrade-227337" [481c391c-43ad-4c06-85fe-2bca48632fe3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:08:30.707216 2137182 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-227337" [2d4c3d1e-51b4-4b11-9bd2-0018977a3fb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:08:30.707235 2137182 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-227337" [9977c7d2-53b1-4c40-a190-de82678c97db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:08:30.707245 2137182 system_pods.go:61] "kube-proxy-mfqrj" [36f996b2-55d5-487f-bd63-eb4aab5db2ac] Running
	I0819 13:08:30.707258 2137182 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-227337" [11f32a9f-3b17-4b53-9645-5f7b217f349a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:08:30.707266 2137182 system_pods.go:61] "storage-provisioner" [20d68ae1-b428-4386-9314-7c370460a5c1] Running
	I0819 13:08:30.707273 2137182 system_pods.go:74] duration metric: took 6.443605ms to wait for pod list to return data ...
	I0819 13:08:30.707288 2137182 kubeadm.go:582] duration metric: took 210.039853ms to wait for: map[apiserver:true system_pods:true]
	I0819 13:08:30.707307 2137182 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:08:30.710304 2137182 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:08:30.710325 2137182 node_conditions.go:123] node cpu capacity is 2
	I0819 13:08:30.710336 2137182 node_conditions.go:105] duration metric: took 3.023874ms to run NodePressure ...
	I0819 13:08:30.710353 2137182 start.go:241] waiting for startup goroutines ...
	I0819 13:08:30.781390 2137182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:08:30.839799 2137182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:08:26.480773 2140327 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:08:26.480810 2140327 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:08:26.480823 2140327 cache.go:56] Caching tarball of preloaded images
	I0819 13:08:26.480908 2140327 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:08:26.480922 2140327 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:08:26.481053 2140327 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/config.json ...
	I0819 13:08:26.481079 2140327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/config.json: {Name:mkaffbf3d7e63d668ce52ceb3da6eb068c1371ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:26.481252 2140327 start.go:360] acquireMachinesLock for enable-default-cni-649777: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:08:31.505045 2137182 main.go:141] libmachine: Making call to close driver server
	I0819 13:08:31.505073 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .Close
	I0819 13:08:31.505246 2137182 main.go:141] libmachine: Making call to close driver server
	I0819 13:08:31.505275 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .Close
	I0819 13:08:31.505405 2137182 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:08:31.505426 2137182 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:08:31.505436 2137182 main.go:141] libmachine: Making call to close driver server
	I0819 13:08:31.505440 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Closing plugin on server side
	I0819 13:08:31.505444 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .Close
	I0819 13:08:31.505563 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Closing plugin on server side
	I0819 13:08:31.505549 2137182 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:08:31.505591 2137182 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:08:31.505599 2137182 main.go:141] libmachine: Making call to close driver server
	I0819 13:08:31.505607 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .Close
	I0819 13:08:31.505805 2137182 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:08:31.505824 2137182 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:08:31.505841 2137182 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:08:31.505854 2137182 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:08:31.511575 2137182 main.go:141] libmachine: Making call to close driver server
	I0819 13:08:31.511596 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .Close
	I0819 13:08:31.511870 2137182 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:08:31.511882 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | Closing plugin on server side
	I0819 13:08:31.511888 2137182 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:08:31.513638 2137182 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 13:08:31.514687 2137182 addons.go:510] duration metric: took 1.017411321s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 13:08:31.514716 2137182 start.go:246] waiting for cluster config update ...
	I0819 13:08:31.514726 2137182 start.go:255] writing updated cluster config ...
	I0819 13:08:31.514936 2137182 ssh_runner.go:195] Run: rm -f paused
	I0819 13:08:31.564011 2137182 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:08:31.565717 2137182 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-227337" cluster and "default" namespace by default
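	The minikube log above ends after the standard bring-up sequence: poll the apiserver /healthz endpoint until it answers 200 "ok", list kube-system pods, check node conditions, then apply the addon manifests. As a rough illustration of that first step only (a sketch, not minikube's actual code), a healthz polling loop could look like the Go snippet below; the endpoint URL, timeout, and the InsecureSkipVerify setting are assumptions made for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// 200 "ok" or the deadline passes. Illustrative sketch only.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver certificate is typically self-signed during bring-up,
		// so this sketch skips verification; a real client would trust the
		// cluster CA instead.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Address taken from the log above; adjust for another cluster.
		if err := waitForHealthz("https://192.168.50.233:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}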
	
	
	==> CRI-O <==
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.290289532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072912290267810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b77b9e11-b272-4ca8-9578-befd2dca99fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.290953511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8678aa10-cbe3-4f27-a5ad-1f950f438d3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.291013676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8678aa10-cbe3-4f27-a5ad-1f950f438d3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.291332667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:701433078c95d4ab03f02bb761a2297607294496d6ed6595f8300b7819b7441b,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072905579077028,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f930fc82377424c5ae92877932e087cbfa0256d87a595179b222a35a62d66859,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072905543329180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c715533ef08a2a9d6d79dae127339959bcbecfff5812e594fa9869a458a0fb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072905569675773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4371de16879e40a31ba2f990250fd09da95627a298d596d41c618b2b2a40478e,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072905554305869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61933923b0d17789330548bdb67bc0c861d8227bd79e9bfda925751ccdc73f7a,PodSandboxId:ebf945b642f09c0f95cd39bbaba68196782ef5ccfd5921c374947a535cdf1c15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072892287847226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fc16b607ec30ab94e15a4ae0692abb753d62610392e47273e0992ba7fce3f7,PodSandboxId:12c6ba8d6e273c715c7c56a7423de98e064ea0d75f93c14fe02b24f7436baa3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072892439947036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbc3efd919e471f9968e49c545d2539d403df5c7470c4c725c65213410c944,PodSandboxId:65de65237f33f46dd81f6061a07df9edfae7dd5eb01ac2f3eaacae3780eaa3cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072893003063372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35655bcd05d1e6cfe7830b95d9349f48f0b6256f9f1f0f963151b7ecbfac489a,PodSandboxId:38490a7865f975f54186431705d103f28d4b0cbfe8179d518e2f093f893d0dc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072892822894647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd548cf89ba215379e8aa69550e112a028e386f194f2fbc9bcf2cf13990f4eb6,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072891992378277,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc961a50c225132c3221e3ec190c84a95394e1fa2c337e0254e268901b5d9d30,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072891835033437,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdbe996d4f8be3ef9a593cfe623dbd5cabdeb24de7b668e95142cbaae5d1e66,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072891806542507,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072891734069286,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19257bcdb0f07e067525dfa287d11ba0e23334f386c8b0639f688697ea93a2f6,PodSandboxId:122a579ba3ef901a6874a9df705c799debcf956f9b5eef671b622e17349a270c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876321874020,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98612c5bc3fd2d28d8b35dce2e923298a1627ee02fda1ad6521f504d17484d40,PodSandboxId:421241cd006568633c60dcb4b06a5aa6d0caa73fb66253303f273f62bf4e4edf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876271015023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41919a85227a5d7fc06a55f2c542caba7719bf315403f8fad4bbabf59aa0c827,PodSandboxId:884e21348269d26d5cdc20d24f2c8fd6930c78cc8927f700312ace1b9aecba1e,Metadata:
&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724072876090446782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8971636feb40a6be92f3ac2cd3934b1def5a7038ee47c1ad33c65b2979c66b,PodSandboxId:c2fb5234b17f44d9bf6b0cd9eac18218dc1b75724f50ad8429042aa15a2ba822,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072875008856371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8678aa10-cbe3-4f27-a5ad-1f950f438d3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.354877691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b4ac817-3002-41b4-81a8-9864d7cdac69 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.354968186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b4ac817-3002-41b4-81a8-9864d7cdac69 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.356249768Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52731ad8-ab84-4b72-afa5-b0f8c6b21564 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.356724711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072912356700246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52731ad8-ab84-4b72-afa5-b0f8c6b21564 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.368783571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=720956a5-f602-4118-a58c-d35024e73a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.368859488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=720956a5-f602-4118-a58c-d35024e73a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.369197074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:701433078c95d4ab03f02bb761a2297607294496d6ed6595f8300b7819b7441b,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072905579077028,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f930fc82377424c5ae92877932e087cbfa0256d87a595179b222a35a62d66859,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072905543329180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c715533ef08a2a9d6d79dae127339959bcbecfff5812e594fa9869a458a0fb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072905569675773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4371de16879e40a31ba2f990250fd09da95627a298d596d41c618b2b2a40478e,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072905554305869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61933923b0d17789330548bdb67bc0c861d8227bd79e9bfda925751ccdc73f7a,PodSandboxId:ebf945b642f09c0f95cd39bbaba68196782ef5ccfd5921c374947a535cdf1c15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072892287847226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fc16b607ec30ab94e15a4ae0692abb753d62610392e47273e0992ba7fce3f7,PodSandboxId:12c6ba8d6e273c715c7c56a7423de98e064ea0d75f93c14fe02b24f7436baa3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072892439947036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbc3efd919e471f9968e49c545d2539d403df5c7470c4c725c65213410c944,PodSandboxId:65de65237f33f46dd81f6061a07df9edfae7dd5eb01ac2f3eaacae3780eaa3cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072893003063372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35655bcd05d1e6cfe7830b95d9349f48f0b6256f9f1f0f963151b7ecbfac489a,PodSandboxId:38490a7865f975f54186431705d103f28d4b0cbfe8179d518e2f093f893d0dc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072892822894647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd548cf89ba215379e8aa69550e112a028e386f194f2fbc9bcf2cf13990f4eb6,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072891992378277,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc961a50c225132c3221e3ec190c84a95394e1fa2c337e0254e268901b5d9d30,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072891835033437,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdbe996d4f8be3ef9a593cfe623dbd5cabdeb24de7b668e95142cbaae5d1e66,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072891806542507,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072891734069286,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19257bcdb0f07e067525dfa287d11ba0e23334f386c8b0639f688697ea93a2f6,PodSandboxId:122a579ba3ef901a6874a9df705c799debcf956f9b5eef671b622e17349a270c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876321874020,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98612c5bc3fd2d28d8b35dce2e923298a1627ee02fda1ad6521f504d17484d40,PodSandboxId:421241cd006568633c60dcb4b06a5aa6d0caa73fb66253303f273f62bf4e4edf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876271015023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41919a85227a5d7fc06a55f2c542caba7719bf315403f8fad4bbabf59aa0c827,PodSandboxId:884e21348269d26d5cdc20d24f2c8fd6930c78cc8927f700312ace1b9aecba1e,Metadata:
&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724072876090446782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8971636feb40a6be92f3ac2cd3934b1def5a7038ee47c1ad33c65b2979c66b,PodSandboxId:c2fb5234b17f44d9bf6b0cd9eac18218dc1b75724f50ad8429042aa15a2ba822,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072875008856371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=720956a5-f602-4118-a58c-d35024e73a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.431046396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b3420ce-f7ee-4ea1-980b-275cc0f8c8ef name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.431136532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b3420ce-f7ee-4ea1-980b-275cc0f8c8ef name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.433065631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=602e9a59-164d-4eec-ab59-a20f9d2759da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.433445613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072912433423571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=602e9a59-164d-4eec-ab59-a20f9d2759da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.435055085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fa6a022-d121-4310-8333-63add90ada50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.435202911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fa6a022-d121-4310-8333-63add90ada50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.435679111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:701433078c95d4ab03f02bb761a2297607294496d6ed6595f8300b7819b7441b,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072905579077028,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f930fc82377424c5ae92877932e087cbfa0256d87a595179b222a35a62d66859,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072905543329180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c715533ef08a2a9d6d79dae127339959bcbecfff5812e594fa9869a458a0fb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072905569675773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4371de16879e40a31ba2f990250fd09da95627a298d596d41c618b2b2a40478e,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072905554305869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61933923b0d17789330548bdb67bc0c861d8227bd79e9bfda925751ccdc73f7a,PodSandboxId:ebf945b642f09c0f95cd39bbaba68196782ef5ccfd5921c374947a535cdf1c15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072892287847226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fc16b607ec30ab94e15a4ae0692abb753d62610392e47273e0992ba7fce3f7,PodSandboxId:12c6ba8d6e273c715c7c56a7423de98e064ea0d75f93c14fe02b24f7436baa3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072892439947036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbc3efd919e471f9968e49c545d2539d403df5c7470c4c725c65213410c944,PodSandboxId:65de65237f33f46dd81f6061a07df9edfae7dd5eb01ac2f3eaacae3780eaa3cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072893003063372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35655bcd05d1e6cfe7830b95d9349f48f0b6256f9f1f0f963151b7ecbfac489a,PodSandboxId:38490a7865f975f54186431705d103f28d4b0cbfe8179d518e2f093f893d0dc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072892822894647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd548cf89ba215379e8aa69550e112a028e386f194f2fbc9bcf2cf13990f4eb6,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072891992378277,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc961a50c225132c3221e3ec190c84a95394e1fa2c337e0254e268901b5d9d30,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072891835033437,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdbe996d4f8be3ef9a593cfe623dbd5cabdeb24de7b668e95142cbaae5d1e66,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072891806542507,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072891734069286,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19257bcdb0f07e067525dfa287d11ba0e23334f386c8b0639f688697ea93a2f6,PodSandboxId:122a579ba3ef901a6874a9df705c799debcf956f9b5eef671b622e17349a270c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876321874020,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98612c5bc3fd2d28d8b35dce2e923298a1627ee02fda1ad6521f504d17484d40,PodSandboxId:421241cd006568633c60dcb4b06a5aa6d0caa73fb66253303f273f62bf4e4edf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876271015023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41919a85227a5d7fc06a55f2c542caba7719bf315403f8fad4bbabf59aa0c827,PodSandboxId:884e21348269d26d5cdc20d24f2c8fd6930c78cc8927f700312ace1b9aecba1e,Metadata:
&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724072876090446782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8971636feb40a6be92f3ac2cd3934b1def5a7038ee47c1ad33c65b2979c66b,PodSandboxId:c2fb5234b17f44d9bf6b0cd9eac18218dc1b75724f50ad8429042aa15a2ba822,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072875008856371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fa6a022-d121-4310-8333-63add90ada50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.469897824Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=feb48419-6f49-43a1-8c8a-b888e38a1b57 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.469978118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=feb48419-6f49-43a1-8c8a-b888e38a1b57 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.471305536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8063c806-bcca-4d51-bf3d-c1fe35f5af3f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.471724909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072912471698331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8063c806-bcca-4d51-bf3d-c1fe35f5af3f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.472907271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fddb3cb6-0e1c-40de-b5de-56f9c8ac5d24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.472959604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fddb3cb6-0e1c-40de-b5de-56f9c8ac5d24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:32 kubernetes-upgrade-227337 crio[2277]: time="2024-08-19 13:08:32.473306678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:701433078c95d4ab03f02bb761a2297607294496d6ed6595f8300b7819b7441b,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072905579077028,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f930fc82377424c5ae92877932e087cbfa0256d87a595179b222a35a62d66859,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072905543329180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c715533ef08a2a9d6d79dae127339959bcbecfff5812e594fa9869a458a0fb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072905569675773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4371de16879e40a31ba2f990250fd09da95627a298d596d41c618b2b2a40478e,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072905554305869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61933923b0d17789330548bdb67bc0c861d8227bd79e9bfda925751ccdc73f7a,PodSandboxId:ebf945b642f09c0f95cd39bbaba68196782ef5ccfd5921c374947a535cdf1c15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724072892287847226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fc16b607ec30ab94e15a4ae0692abb753d62610392e47273e0992ba7fce3f7,PodSandboxId:12c6ba8d6e273c715c7c56a7423de98e064ea0d75f93c14fe02b24f7436baa3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072892439947036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbc3efd919e471f9968e49c545d2539d403df5c7470c4c725c65213410c944,PodSandboxId:65de65237f33f46dd81f6061a07df9edfae7dd5eb01ac2f3eaacae3780eaa3cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072893003063372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35655bcd05d1e6cfe7830b95d9349f48f0b6256f9f1f0f963151b7ecbfac489a,PodSandboxId:38490a7865f975f54186431705d103f28d4b0cbfe8179d518e2f093f893d0dc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072892822894647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd548cf89ba215379e8aa69550e112a028e386f194f2fbc9bcf2cf13990f4eb6,PodSandboxId:67608fb7b30c2fe2855a56befbf03d8366f20ae529db2788024fd0e07cc9f8f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072891992378277,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d755665b5f1939d8288e07dab64630,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc961a50c225132c3221e3ec190c84a95394e1fa2c337e0254e268901b5d9d30,PodSandboxId:8f5d83ce566166ca75e6268066aecc0670c63c90cc5af5396d56530f9de4513b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072891835033437,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985ed187663d0db4b4c7448cfa1679e6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdbe996d4f8be3ef9a593cfe623dbd5cabdeb24de7b668e95142cbaae5d1e66,PodSandboxId:0ff6f99ac51677093aa6c81f747906e5de5606e0a3f40c9958d9add8be8f5b49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072891806542507,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1ada0c16e863b1d35e3a258e3a4d38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb,PodSandboxId:2cb906f03944f03912e93d1f33f2a2934740365e4df9a8b029dfde6e17a9a2fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072891734069286,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-227337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6909d2fa47f505f86b984334016159ff,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19257bcdb0f07e067525dfa287d11ba0e23334f386c8b0639f688697ea93a2f6,PodSandboxId:122a579ba3ef901a6874a9df705c799debcf956f9b5eef671b622e17349a270c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876321874020,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2mlp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41720865-5a4e-41cb-9a04-1b99e0e3c20e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98612c5bc3fd2d28d8b35dce2e923298a1627ee02fda1ad6521f504d17484d40,PodSandboxId:421241cd006568633c60dcb4b06a5aa6d0caa73fb66253303f273f62bf4e4edf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072876271015023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wgkgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84103a-ef6d-4427-b63d-266586be5095,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41919a85227a5d7fc06a55f2c542caba7719bf315403f8fad4bbabf59aa0c827,PodSandboxId:884e21348269d26d5cdc20d24f2c8fd6930c78cc8927f700312ace1b9aecba1e,Metadata:
&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724072876090446782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20d68ae1-b428-4386-9314-7c370460a5c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8971636feb40a6be92f3ac2cd3934b1def5a7038ee47c1ad33c65b2979c66b,PodSandboxId:c2fb5234b17f44d9bf6b0cd9eac18218dc1b75724f50ad8429042aa15a2ba822,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072875008856371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfqrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f996b2-55d5-487f-bd63-eb4aab5db2ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fddb3cb6-0e1c-40de-b5de-56f9c8ac5d24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	701433078c95d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   6 seconds ago       Running             kube-controller-manager   2                   67608fb7b30c2       kube-controller-manager-kubernetes-upgrade-227337
	92c715533ef08       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   6 seconds ago       Running             kube-scheduler            2                   2cb906f03944f       kube-scheduler-kubernetes-upgrade-227337
	4371de16879e4       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   8f5d83ce56616       kube-apiserver-kubernetes-upgrade-227337
	f930fc8237742       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   0ff6f99ac5167       etcd-kubernetes-upgrade-227337
	33cbc3efd919e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   1                   65de65237f33f       coredns-6f6b679f8f-wgkgj
	35655bcd05d1e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   1                   38490a7865f97       coredns-6f6b679f8f-2mlp2
	31fc16b607ec3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   20 seconds ago      Running             kube-proxy                1                   12c6ba8d6e273       kube-proxy-mfqrj
	61933923b0d17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 seconds ago      Running             storage-provisioner       1                   ebf945b642f09       storage-provisioner
	bd548cf89ba21       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   20 seconds ago      Exited              kube-controller-manager   1                   67608fb7b30c2       kube-controller-manager-kubernetes-upgrade-227337
	bc961a50c2251       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 seconds ago      Exited              kube-apiserver            1                   8f5d83ce56616       kube-apiserver-kubernetes-upgrade-227337
	8cdbe996d4f8b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   20 seconds ago      Exited              etcd                      1                   0ff6f99ac5167       etcd-kubernetes-upgrade-227337
	a6a4571b47640       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   20 seconds ago      Exited              kube-scheduler            1                   2cb906f03944f       kube-scheduler-kubernetes-upgrade-227337
	19257bcdb0f07       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   0                   122a579ba3ef9       coredns-6f6b679f8f-2mlp2
	98612c5bc3fd2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   0                   421241cd00656       coredns-6f6b679f8f-wgkgj
	41919a85227a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   36 seconds ago      Exited              storage-provisioner       0                   884e21348269d       storage-provisioner
	9a8971636feb4       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   37 seconds ago      Exited              kube-proxy                0                   c2fb5234b17f4       kube-proxy-mfqrj
	
	
	==> coredns [19257bcdb0f07e067525dfa287d11ba0e23334f386c8b0639f688697ea93a2f6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [33cbc3efd919e471f9968e49c545d2539d403df5c7470c4c725c65213410c944] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [35655bcd05d1e6cfe7830b95d9349f48f0b6256f9f1f0f963151b7ecbfac489a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [98612c5bc3fd2d28d8b35dce2e923298a1627ee02fda1ad6521f504d17484d40] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-227337
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-227337
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-227337
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:08:28 +0000   Mon, 19 Aug 2024 13:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:08:28 +0000   Mon, 19 Aug 2024 13:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:08:28 +0000   Mon, 19 Aug 2024 13:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:08:28 +0000   Mon, 19 Aug 2024 13:07:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.233
	  Hostname:    kubernetes-upgrade-227337
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c63e9694d02946b6a5701ca2009cec1a
	  System UUID:                c63e9694-d029-46b6-a570-1ca2009cec1a
	  Boot ID:                    b38c62e1-58fe-4639-a2e9-ba525c4f42a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-2mlp2                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     38s
	  kube-system                 coredns-6f6b679f8f-wgkgj                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     38s
	  kube-system                 etcd-kubernetes-upgrade-227337                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         39s
	  kube-system                 kube-apiserver-kubernetes-upgrade-227337             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-227337    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-mfqrj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-kubernetes-upgrade-227337             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node kubernetes-upgrade-227337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 52s)  kubelet          Node kubernetes-upgrade-227337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node kubernetes-upgrade-227337 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           41s                node-controller  Node kubernetes-upgrade-227337 event: Registered Node kubernetes-upgrade-227337 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-227337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-227337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet          Node kubernetes-upgrade-227337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-227337 event: Registered Node kubernetes-upgrade-227337 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.044278] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.062973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060822] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.210847] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.148536] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.292837] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +4.500428] systemd-fstab-generator[732]: Ignoring "noauto" option for root device
	[  +0.080537] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.026757] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[ +14.807738] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.119560] kauditd_printk_skb: 97 callbacks suppressed
	[Aug19 13:08] systemd-fstab-generator[2197]: Ignoring "noauto" option for root device
	[  +0.095362] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.071287] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[  +0.187079] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.193536] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.315555] systemd-fstab-generator[2263]: Ignoring "noauto" option for root device
	[  +6.159435] systemd-fstab-generator[2415]: Ignoring "noauto" option for root device
	[  +0.083319] kauditd_printk_skb: 100 callbacks suppressed
	[ +13.855884] systemd-fstab-generator[3425]: Ignoring "noauto" option for root device
	[  +0.110377] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.647399] systemd-fstab-generator[3704]: Ignoring "noauto" option for root device
	[  +0.088866] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [8cdbe996d4f8be3ef9a593cfe623dbd5cabdeb24de7b668e95142cbaae5d1e66] <==
	{"level":"info","ts":"2024-08-19T13:08:14.476539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:08:14.476635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 received MsgPreVoteResp from d8ba74716588cc23 at term 2"}
	{"level":"info","ts":"2024-08-19T13:08:14.476660Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:14.476690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 received MsgVoteResp from d8ba74716588cc23 at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:14.476704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:14.476733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8ba74716588cc23 elected leader d8ba74716588cc23 at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:14.478988Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d8ba74716588cc23","local-member-attributes":"{Name:kubernetes-upgrade-227337 ClientURLs:[https://192.168.50.233:2379]}","request-path":"/0/members/d8ba74716588cc23/attributes","cluster-id":"3716845a304565c2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:08:14.479039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:08:14.479250Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:08:14.479294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:08:14.479331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:08:14.480177Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:14.480365Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:14.481115Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.233:2379"}
	{"level":"info","ts":"2024-08-19T13:08:14.481520Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:08:23.160446Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T13:08:23.160511Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-227337","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.233:2380"],"advertise-client-urls":["https://192.168.50.233:2379"]}
	{"level":"warn","ts":"2024-08-19T13:08:23.160764Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T13:08:23.160812Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T13:08:23.161494Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.233:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T13:08:23.161674Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.233:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T13:08:23.163300Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d8ba74716588cc23","current-leader-member-id":"d8ba74716588cc23"}
	{"level":"info","ts":"2024-08-19T13:08:23.167791Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.233:2380"}
	{"level":"info","ts":"2024-08-19T13:08:23.167945Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.233:2380"}
	{"level":"info","ts":"2024-08-19T13:08:23.167969Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-227337","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.233:2380"],"advertise-client-urls":["https://192.168.50.233:2379"]}
	
	
	==> etcd [f930fc82377424c5ae92877932e087cbfa0256d87a595179b222a35a62d66859] <==
	{"level":"info","ts":"2024-08-19T13:08:26.014110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 switched to configuration voters=(15616922688244206627)"}
	{"level":"info","ts":"2024-08-19T13:08:26.014228Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3716845a304565c2","local-member-id":"d8ba74716588cc23","added-peer-id":"d8ba74716588cc23","added-peer-peer-urls":["https://192.168.50.233:2380"]}
	{"level":"info","ts":"2024-08-19T13:08:26.014344Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3716845a304565c2","local-member-id":"d8ba74716588cc23","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:08:26.014385Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:08:26.036745Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:08:26.038621Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.233:2380"}
	{"level":"info","ts":"2024-08-19T13:08:26.038855Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.233:2380"}
	{"level":"info","ts":"2024-08-19T13:08:26.044955Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d8ba74716588cc23","initial-advertise-peer-urls":["https://192.168.50.233:2380"],"listen-peer-urls":["https://192.168.50.233:2380"],"advertise-client-urls":["https://192.168.50.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:08:26.045820Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:08:27.453098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:27.453153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:27.453184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 received MsgPreVoteResp from d8ba74716588cc23 at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:27.453200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 became candidate at term 4"}
	{"level":"info","ts":"2024-08-19T13:08:27.453205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 received MsgVoteResp from d8ba74716588cc23 at term 4"}
	{"level":"info","ts":"2024-08-19T13:08:27.453215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8ba74716588cc23 became leader at term 4"}
	{"level":"info","ts":"2024-08-19T13:08:27.453222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8ba74716588cc23 elected leader d8ba74716588cc23 at term 4"}
	{"level":"info","ts":"2024-08-19T13:08:27.458739Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d8ba74716588cc23","local-member-attributes":"{Name:kubernetes-upgrade-227337 ClientURLs:[https://192.168.50.233:2379]}","request-path":"/0/members/d8ba74716588cc23/attributes","cluster-id":"3716845a304565c2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:08:27.458787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:08:27.458980Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:08:27.459021Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:08:27.459049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:08:27.459989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:27.459988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:27.460773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:08:27.461506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.233:2379"}
	
	
	==> kernel <==
	 13:08:32 up 1 min,  0 users,  load average: 1.37, 0.36, 0.12
	Linux kubernetes-upgrade-227337 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4371de16879e40a31ba2f990250fd09da95627a298d596d41c618b2b2a40478e] <==
	I0819 13:08:28.811673       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 13:08:28.811846       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 13:08:28.811959       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 13:08:28.812386       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 13:08:28.812731       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 13:08:28.813117       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 13:08:28.814140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 13:08:28.819722       1 aggregator.go:171] initial CRD sync complete...
	I0819 13:08:28.819791       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 13:08:28.819818       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 13:08:28.819897       1 cache.go:39] Caches are synced for autoregister controller
	I0819 13:08:28.820115       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 13:08:28.835644       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 13:08:28.864880       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:08:28.864978       1 policy_source.go:224] refreshing policies
	I0819 13:08:28.877843       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 13:08:29.616758       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 13:08:29.834978       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.233]
	I0819 13:08:29.836331       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 13:08:29.844130       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 13:08:30.262338       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 13:08:30.276465       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 13:08:30.309190       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 13:08:30.450648       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 13:08:30.457415       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [bc961a50c225132c3221e3ec190c84a95394e1fa2c337e0254e268901b5d9d30] <==
	I0819 13:08:16.213009       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 13:08:16.213203       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 13:08:16.217684       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0819 13:08:16.217802       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 13:08:16.217869       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0819 13:08:16.223184       1 controller.go:157] Shutting down quota evaluator
	I0819 13:08:16.223222       1 controller.go:176] quota evaluator worker shutdown
	I0819 13:08:16.223515       1 controller.go:176] quota evaluator worker shutdown
	I0819 13:08:16.223615       1 controller.go:176] quota evaluator worker shutdown
	I0819 13:08:16.223635       1 controller.go:176] quota evaluator worker shutdown
	I0819 13:08:16.223642       1 controller.go:176] quota evaluator worker shutdown
	W0819 13:08:16.966179       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:16.968822       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:17.966304       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:17.969045       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:18.966187       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:18.969111       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:19.966656       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:19.969276       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:20.966607       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:20.969468       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:21.966341       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:21.969090       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:22.966268       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0819 13:08:22.969292       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [701433078c95d4ab03f02bb761a2297607294496d6ed6595f8300b7819b7441b] <==
	I0819 13:08:32.093834       1 shared_informer.go:320] Caches are synced for HPA
	I0819 13:08:32.105943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0819 13:08:32.105994       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0819 13:08:32.106110       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0819 13:08:32.106177       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0819 13:08:32.122375       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 13:08:32.122407       1 shared_informer.go:320] Caches are synced for disruption
	I0819 13:08:32.122429       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 13:08:32.122422       1 shared_informer.go:320] Caches are synced for GC
	I0819 13:08:32.122535       1 shared_informer.go:320] Caches are synced for taint
	I0819 13:08:32.122784       1 shared_informer.go:320] Caches are synced for job
	I0819 13:08:32.122929       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 13:08:32.123078       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-227337"
	I0819 13:08:32.123132       1 shared_informer.go:320] Caches are synced for PVC protection
	I0819 13:08:32.123150       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 13:08:32.123141       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 13:08:32.125910       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 13:08:32.272695       1 shared_informer.go:320] Caches are synced for cronjob
	I0819 13:08:32.279784       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:08:32.295050       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:08:32.414968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="342.841458ms"
	I0819 13:08:32.415707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="57.461µs"
	I0819 13:08:32.726683       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:08:32.730827       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:08:32.730855       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [bd548cf89ba215379e8aa69550e112a028e386f194f2fbc9bcf2cf13990f4eb6] <==
	
	
	==> kube-proxy [31fc16b607ec30ab94e15a4ae0692abb753d62610392e47273e0992ba7fce3f7] <==
	E0819 13:08:16.247892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	I0819 13:08:16.248177       1 config.go:326] "Starting node config controller"
	I0819 13:08:16.248219       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0819 13:08:16.248326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:16.248393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	E0819 13:08:16.249003       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.50.233:8443: connect: connection refused"
	W0819 13:08:17.138522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:17.139387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:17.503663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:17.503748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:17.808968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:17.809062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:19.081515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:19.081817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:19.796852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:19.797044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:20.705052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:20.705097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:22.728235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:22.728365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:23.780295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:23.780367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-227337&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	W0819 13:08:24.300666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.233:8443: connect: connection refused
	E0819 13:08:24.300776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.233:8443: connect: connection refused" logger="UnhandledError"
	I0819 13:08:32.548973       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9a8971636feb40a6be92f3ac2cd3934b1def5a7038ee47c1ad33c65b2979c66b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:07:55.345640       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:07:55.355477       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.233"]
	E0819 13:07:55.355683       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:07:55.425997       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:07:55.426031       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:07:55.426058       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:07:55.428514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:07:55.429190       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:07:55.429212       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:55.432280       1 config.go:197] "Starting service config controller"
	I0819 13:07:55.432423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:07:55.432755       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:07:55.432822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:07:55.438432       1 config.go:326] "Starting node config controller"
	I0819 13:07:55.438501       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:07:55.533708       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:07:55.533768       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:07:55.540041       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [92c715533ef08a2a9d6d79dae127339959bcbecfff5812e594fa9869a458a0fb] <==
	I0819 13:08:26.535095       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:08:28.712954       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:08:28.713045       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:08:28.713072       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:08:28.713095       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:08:28.785463       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:08:28.785504       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:08:28.791517       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:08:28.791721       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:08:28.796093       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:08:28.791748       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:08:28.896779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb] <==
	I0819 13:08:13.344183       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:08:16.019973       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:08:16.020022       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:08:16.020033       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:08:16.020042       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:08:16.090324       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:08:16.090645       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:08:16.097116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:08:16.097186       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:08:16.097660       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:08:16.097696       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:08:16.199023       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 13:08:23.301405       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.312526    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/5e1ada0c16e863b1d35e3a258e3a4d38-etcd-data\") pod \"etcd-kubernetes-upgrade-227337\" (UID: \"5e1ada0c16e863b1d35e3a258e3a4d38\") " pod="kube-system/etcd-kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.312541    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/985ed187663d0db4b4c7448cfa1679e6-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-227337\" (UID: \"985ed187663d0db4b4c7448cfa1679e6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.312636    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/985ed187663d0db4b4c7448cfa1679e6-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-227337\" (UID: \"985ed187663d0db4b4c7448cfa1679e6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.312743    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/985ed187663d0db4b4c7448cfa1679e6-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-227337\" (UID: \"985ed187663d0db4b4c7448cfa1679e6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.312773    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/12d755665b5f1939d8288e07dab64630-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-227337\" (UID: \"12d755665b5f1939d8288e07dab64630\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.312793    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12d755665b5f1939d8288e07dab64630-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-227337\" (UID: \"12d755665b5f1939d8288e07dab64630\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.464441    3432 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: E0819 13:08:25.465403    3432 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.233:8443: connect: connection refused" node="kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.529923    3432 scope.go:117] "RemoveContainer" containerID="8cdbe996d4f8be3ef9a593cfe623dbd5cabdeb24de7b668e95142cbaae5d1e66"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.530024    3432 scope.go:117] "RemoveContainer" containerID="bc961a50c225132c3221e3ec190c84a95394e1fa2c337e0254e268901b5d9d30"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.533002    3432 scope.go:117] "RemoveContainer" containerID="bd548cf89ba215379e8aa69550e112a028e386f194f2fbc9bcf2cf13990f4eb6"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.533138    3432 scope.go:117] "RemoveContainer" containerID="a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: E0819 13:08:25.710050    3432 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-227337?timeout=10s\": dial tcp 192.168.50.233:8443: connect: connection refused" interval="800ms"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:25.867263    3432 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-227337"
	Aug 19 13:08:25 kubernetes-upgrade-227337 kubelet[3432]: E0819 13:08:25.868334    3432 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.233:8443: connect: connection refused" node="kubernetes-upgrade-227337"
	Aug 19 13:08:26 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:26.669821    3432 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-227337"
	Aug 19 13:08:28 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:28.897423    3432 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-227337"
	Aug 19 13:08:28 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:28.898287    3432 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-227337"
	Aug 19 13:08:28 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:28.898436    3432 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 13:08:28 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:28.899829    3432 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 13:08:29 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:29.058977    3432 apiserver.go:52] "Watching apiserver"
	Aug 19 13:08:29 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:29.095232    3432 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 13:08:29 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:29.165225    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f996b2-55d5-487f-bd63-eb4aab5db2ac-lib-modules\") pod \"kube-proxy-mfqrj\" (UID: \"36f996b2-55d5-487f-bd63-eb4aab5db2ac\") " pod="kube-system/kube-proxy-mfqrj"
	Aug 19 13:08:29 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:29.165283    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f996b2-55d5-487f-bd63-eb4aab5db2ac-xtables-lock\") pod \"kube-proxy-mfqrj\" (UID: \"36f996b2-55d5-487f-bd63-eb4aab5db2ac\") " pod="kube-system/kube-proxy-mfqrj"
	Aug 19 13:08:29 kubernetes-upgrade-227337 kubelet[3432]: I0819 13:08:29.165317    3432 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/20d68ae1-b428-4386-9314-7c370460a5c1-tmp\") pod \"storage-provisioner\" (UID: \"20d68ae1-b428-4386-9314-7c370460a5c1\") " pod="kube-system/storage-provisioner"
	
	
	==> storage-provisioner [41919a85227a5d7fc06a55f2c542caba7719bf315403f8fad4bbabf59aa0c827] <==
	I0819 13:07:56.292231       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:07:56.316731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:07:56.316853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:07:56.326663       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:07:56.326897       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-227337_70f55ee1-f567-451c-b4de-49362c164f38!
	I0819 13:07:56.327887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81eb9b3b-5a31-4c60-86db-4c9303b8e04b", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-227337_70f55ee1-f567-451c-b4de-49362c164f38 became leader
	I0819 13:07:56.428756       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-227337_70f55ee1-f567-451c-b4de-49362c164f38!
	
	
	==> storage-provisioner [61933923b0d17789330548bdb67bc0c861d8227bd79e9bfda925751ccdc73f7a] <==
	I0819 13:08:13.454735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:08:16.106857       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:08:16.107101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0819 13:08:19.585229       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0819 13:08:23.843859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0819 13:08:31.878460       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:08:31.879005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81eb9b3b-5a31-4c60-86db-4c9303b8e04b", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-227337_57e3b29f-0f84-435d-ac50-2870b4ed631b became leader
	I0819 13:08:31.879154       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-227337_57e3b29f-0f84-435d-ac50-2870b4ed631b!
	I0819 13:08:31.979407       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-227337_57e3b29f-0f84-435d-ac50-2870b4ed631b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-227337 -n kubernetes-upgrade-227337
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-227337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-227337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-227337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-227337: (1.14363312s)
--- FAIL: TestKubernetesUpgrade (390.37s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-698949 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-698949 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.616478463s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-698949] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-698949" primary control-plane node in "pause-698949" cluster
	* Updating the running kvm2 "pause-698949" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-698949" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:07:26.765586 2136755 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:07:26.765759 2136755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:07:26.765775 2136755 out.go:358] Setting ErrFile to fd 2...
	I0819 13:07:26.765781 2136755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:07:26.766063 2136755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:07:26.766870 2136755 out.go:352] Setting JSON to false
	I0819 13:07:26.768319 2136755 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":164997,"bootTime":1723907850,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:07:26.768405 2136755 start.go:139] virtualization: kvm guest
	I0819 13:07:26.770006 2136755 out.go:177] * [pause-698949] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:07:26.771533 2136755 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:07:26.771563 2136755 notify.go:220] Checking for updates...
	I0819 13:07:26.773657 2136755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:07:26.774808 2136755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:07:26.776027 2136755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:07:26.777301 2136755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:07:26.778461 2136755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:07:26.780215 2136755 config.go:182] Loaded profile config "pause-698949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:07:26.780835 2136755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:07:26.780922 2136755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:07:26.797737 2136755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0819 13:07:26.798266 2136755 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:07:26.798973 2136755 main.go:141] libmachine: Using API Version  1
	I0819 13:07:26.799002 2136755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:07:26.799369 2136755 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:07:26.799577 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:26.799898 2136755 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:07:26.800365 2136755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:07:26.800414 2136755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:07:26.816185 2136755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0819 13:07:26.816730 2136755 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:07:26.817337 2136755 main.go:141] libmachine: Using API Version  1
	I0819 13:07:26.817365 2136755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:07:26.817678 2136755 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:07:26.817867 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:26.860054 2136755 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:07:26.861280 2136755 start.go:297] selected driver: kvm2
	I0819 13:07:26.861315 2136755 start.go:901] validating driver "kvm2" against &{Name:pause-698949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-698949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:07:26.861454 2136755 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:07:26.861759 2136755 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:07:26.861833 2136755 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:07:26.878732 2136755 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:07:26.879625 2136755 cni.go:84] Creating CNI manager for ""
	I0819 13:07:26.879647 2136755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:07:26.879708 2136755 start.go:340] cluster config:
	{Name:pause-698949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-698949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:07:26.879865 2136755 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:07:26.881703 2136755 out.go:177] * Starting "pause-698949" primary control-plane node in "pause-698949" cluster
	I0819 13:07:26.882874 2136755 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:07:26.882917 2136755 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:07:26.882925 2136755 cache.go:56] Caching tarball of preloaded images
	I0819 13:07:26.883026 2136755 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:07:26.883040 2136755 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:07:26.883176 2136755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/config.json ...
	I0819 13:07:26.883468 2136755 start.go:360] acquireMachinesLock for pause-698949: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:07:32.077379 2136755 start.go:364] duration metric: took 5.193857533s to acquireMachinesLock for "pause-698949"
	I0819 13:07:32.077440 2136755 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:07:32.077451 2136755 fix.go:54] fixHost starting: 
	I0819 13:07:32.077878 2136755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:07:32.077933 2136755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:07:32.098287 2136755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I0819 13:07:32.098753 2136755 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:07:32.099343 2136755 main.go:141] libmachine: Using API Version  1
	I0819 13:07:32.099370 2136755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:07:32.099734 2136755 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:07:32.099950 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:32.100113 2136755 main.go:141] libmachine: (pause-698949) Calling .GetState
	I0819 13:07:32.101850 2136755 fix.go:112] recreateIfNeeded on pause-698949: state=Running err=<nil>
	W0819 13:07:32.101874 2136755 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:07:32.103685 2136755 out.go:177] * Updating the running kvm2 "pause-698949" VM ...
	I0819 13:07:32.104875 2136755 machine.go:93] provisionDockerMachine start ...
	I0819 13:07:32.104900 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:32.105110 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:32.108021 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.108513 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.108550 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.108766 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:32.109003 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.109208 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.109419 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:32.109610 2136755 main.go:141] libmachine: Using SSH client type: native
	I0819 13:07:32.109865 2136755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0819 13:07:32.109882 2136755 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:07:32.228582 2136755 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-698949
	
	I0819 13:07:32.228621 2136755 main.go:141] libmachine: (pause-698949) Calling .GetMachineName
	I0819 13:07:32.228929 2136755 buildroot.go:166] provisioning hostname "pause-698949"
	I0819 13:07:32.228966 2136755 main.go:141] libmachine: (pause-698949) Calling .GetMachineName
	I0819 13:07:32.229183 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:32.232253 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.232636 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.232660 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.232821 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:32.233016 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.233172 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.233316 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:32.233502 2136755 main.go:141] libmachine: Using SSH client type: native
	I0819 13:07:32.233676 2136755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0819 13:07:32.233691 2136755 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-698949 && echo "pause-698949" | sudo tee /etc/hostname
	I0819 13:07:32.370177 2136755 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-698949
	
	I0819 13:07:32.370215 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:32.373641 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.374032 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.374060 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.374208 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:32.374441 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.374656 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.374853 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:32.375030 2136755 main.go:141] libmachine: Using SSH client type: native
	I0819 13:07:32.375252 2136755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0819 13:07:32.375288 2136755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-698949' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-698949/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-698949' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:07:32.501683 2136755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:07:32.501719 2136755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:07:32.501773 2136755 buildroot.go:174] setting up certificates
	I0819 13:07:32.501788 2136755 provision.go:84] configureAuth start
	I0819 13:07:32.501810 2136755 main.go:141] libmachine: (pause-698949) Calling .GetMachineName
	I0819 13:07:32.502093 2136755 main.go:141] libmachine: (pause-698949) Calling .GetIP
	I0819 13:07:32.505274 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.505697 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.505732 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.505957 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:32.508189 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.508558 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.508600 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.508705 2136755 provision.go:143] copyHostCerts
	I0819 13:07:32.508783 2136755 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:07:32.508802 2136755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:07:32.508857 2136755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:07:32.508970 2136755 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:07:32.508980 2136755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:07:32.509001 2136755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:07:32.509075 2136755 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:07:32.509082 2136755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:07:32.509100 2136755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:07:32.509155 2136755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.pause-698949 san=[127.0.0.1 192.168.39.221 localhost minikube pause-698949]
	I0819 13:07:32.610370 2136755 provision.go:177] copyRemoteCerts
	I0819 13:07:32.610436 2136755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:07:32.610464 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:32.613370 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.613827 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.613856 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.614058 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:32.614303 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.614496 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:32.614667 2136755 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/pause-698949/id_rsa Username:docker}
	I0819 13:07:32.703607 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:07:32.733298 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:07:32.758289 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 13:07:32.788748 2136755 provision.go:87] duration metric: took 286.941171ms to configureAuth
	I0819 13:07:32.788782 2136755 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:07:32.788993 2136755 config.go:182] Loaded profile config "pause-698949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:07:32.789065 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:32.792849 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.793391 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:32.793427 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:32.793815 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:32.794055 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.794241 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:32.794421 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:32.794660 2136755 main.go:141] libmachine: Using SSH client type: native
	I0819 13:07:32.794908 2136755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0819 13:07:32.794934 2136755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:07:38.432357 2136755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:07:38.432385 2136755 machine.go:96] duration metric: took 6.327491697s to provisionDockerMachine
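The SSH step above writes CRIO_MINIKUBE_OPTIONS (adding the service CIDR 10.96.0.0/12 as an insecure registry) into /etc/sysconfig/crio.minikube and then restarts CRI-O; that restart appears to account for most of the 6.3s provisioning time reported here. A minimal sketch of checking the result on the guest, using only the paths shown in the log (the is-active check is illustrative and not part of this run):

	# Inspect the drop-in written by the provisioning step
	cat /etc/sysconfig/crio.minikube
	# Expected content, per the command above:
	#   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# Confirm CRI-O came back up after the restart
	sudo systemctl is-active crio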
	I0819 13:07:38.432400 2136755 start.go:293] postStartSetup for "pause-698949" (driver="kvm2")
	I0819 13:07:38.432412 2136755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:07:38.432439 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:38.432785 2136755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:07:38.432815 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:38.435970 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.436442 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:38.436470 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.436683 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:38.436889 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:38.437037 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:38.437189 2136755 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/pause-698949/id_rsa Username:docker}
	I0819 13:07:38.535893 2136755 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:07:38.542013 2136755 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:07:38.542049 2136755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:07:38.542121 2136755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:07:38.542215 2136755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:07:38.542347 2136755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:07:38.556300 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:07:38.584929 2136755 start.go:296] duration metric: took 152.492448ms for postStartSetup
	I0819 13:07:38.584981 2136755 fix.go:56] duration metric: took 6.507528995s for fixHost
	I0819 13:07:38.585011 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:38.588294 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.588711 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:38.588746 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.588902 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:38.589101 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:38.589321 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:38.589478 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:38.589658 2136755 main.go:141] libmachine: Using SSH client type: native
	I0819 13:07:38.589880 2136755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0819 13:07:38.589892 2136755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:07:38.710596 2136755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724072858.674412816
	
	I0819 13:07:38.710626 2136755 fix.go:216] guest clock: 1724072858.674412816
	I0819 13:07:38.710637 2136755 fix.go:229] Guest: 2024-08-19 13:07:38.674412816 +0000 UTC Remote: 2024-08-19 13:07:38.584986077 +0000 UTC m=+11.861406348 (delta=89.426739ms)
	I0819 13:07:38.710669 2136755 fix.go:200] guest clock delta is within tolerance: 89.426739ms
	I0819 13:07:38.710677 2136755 start.go:83] releasing machines lock for "pause-698949", held for 6.633262999s
	I0819 13:07:38.710705 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:38.710988 2136755 main.go:141] libmachine: (pause-698949) Calling .GetIP
	I0819 13:07:38.714358 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.714885 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:38.714928 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.715137 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:38.715735 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:38.715958 2136755 main.go:141] libmachine: (pause-698949) Calling .DriverName
	I0819 13:07:38.716102 2136755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:07:38.716178 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:38.716207 2136755 ssh_runner.go:195] Run: cat /version.json
	I0819 13:07:38.716224 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHHostname
	I0819 13:07:38.719244 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.719480 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.719625 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:38.719650 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.719826 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:38.719979 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:38.720012 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:38.720007 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:38.720216 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:38.720318 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHPort
	I0819 13:07:38.720388 2136755 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/pause-698949/id_rsa Username:docker}
	I0819 13:07:38.720495 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHKeyPath
	I0819 13:07:38.720673 2136755 main.go:141] libmachine: (pause-698949) Calling .GetSSHUsername
	I0819 13:07:38.720818 2136755 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/pause-698949/id_rsa Username:docker}
	I0819 13:07:38.805951 2136755 ssh_runner.go:195] Run: systemctl --version
	I0819 13:07:38.828750 2136755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:07:38.991043 2136755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:07:39.000174 2136755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:07:39.000279 2136755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:07:39.011011 2136755 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 13:07:39.011037 2136755 start.go:495] detecting cgroup driver to use...
	I0819 13:07:39.011125 2136755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:07:39.028645 2136755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:07:39.043664 2136755 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:07:39.043727 2136755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:07:39.057642 2136755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:07:39.071807 2136755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:07:39.214221 2136755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:07:39.359174 2136755 docker.go:233] disabling docker service ...
	I0819 13:07:39.359277 2136755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:07:39.377301 2136755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:07:39.391964 2136755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:07:39.533961 2136755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:07:39.671823 2136755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:07:39.686493 2136755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:07:39.708914 2136755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:07:39.708982 2136755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.723282 2136755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:07:39.723369 2136755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.734672 2136755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.746175 2136755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.757750 2136755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:07:39.769169 2136755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.780028 2136755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.791657 2136755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:07:39.803696 2136755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:07:39.814637 2136755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:07:39.825016 2136755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:07:39.985205 2136755 ssh_runner.go:195] Run: sudo systemctl restart crio
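The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before the daemon-reload and crio restart. A hedged way to confirm the drop-in afterwards, with the expected values taken from the commands above (the grep itself is illustrative, not part of this run):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",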
	I0819 13:07:40.730368 2136755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:07:40.730450 2136755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:07:40.735863 2136755 start.go:563] Will wait 60s for crictl version
	I0819 13:07:40.735930 2136755 ssh_runner.go:195] Run: which crictl
	I0819 13:07:40.740851 2136755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:07:40.785495 2136755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
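The version block above comes from crictl, which finds the CRI-O socket through the /etc/crictl.yaml written at 13:07:39.686 rather than a --runtime-endpoint flag on every call. A short sketch of that file and a second query that relies on it (the images call also appears below at 13:07:40.859):

	# /etc/crictl.yaml as written earlier in this run
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# With the file in place, no endpoint flag is needed:
	sudo crictl version
	sudo crictl images --output json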
	I0819 13:07:40.785576 2136755 ssh_runner.go:195] Run: crio --version
	I0819 13:07:40.813273 2136755 ssh_runner.go:195] Run: crio --version
	I0819 13:07:40.850454 2136755 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:07:40.851467 2136755 main.go:141] libmachine: (pause-698949) Calling .GetIP
	I0819 13:07:40.854284 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:40.854698 2136755 main.go:141] libmachine: (pause-698949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:72:58", ip: ""} in network mk-pause-698949: {Iface:virbr3 ExpiryTime:2024-08-19 14:06:10 +0000 UTC Type:0 Mac:52:54:00:c3:72:58 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:pause-698949 Clientid:01:52:54:00:c3:72:58}
	I0819 13:07:40.854729 2136755 main.go:141] libmachine: (pause-698949) DBG | domain pause-698949 has defined IP address 192.168.39.221 and MAC address 52:54:00:c3:72:58 in network mk-pause-698949
	I0819 13:07:40.854913 2136755 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:07:40.859750 2136755 kubeadm.go:883] updating cluster {Name:pause-698949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-698949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:07:40.859891 2136755 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:07:40.859934 2136755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:07:40.906252 2136755 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:07:40.906284 2136755 crio.go:433] Images already preloaded, skipping extraction
	I0819 13:07:40.906348 2136755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:07:40.941590 2136755 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:07:40.941614 2136755 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:07:40.941622 2136755 kubeadm.go:934] updating node { 192.168.39.221 8443 v1.31.0 crio true true} ...
	I0819 13:07:40.941742 2136755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-698949 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-698949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:07:40.941830 2136755 ssh_runner.go:195] Run: crio config
	I0819 13:07:40.997944 2136755 cni.go:84] Creating CNI manager for ""
	I0819 13:07:40.997982 2136755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:07:40.998002 2136755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:07:40.998032 2136755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-698949 NodeName:pause-698949 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:07:40.998255 2136755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-698949"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:07:40.998353 2136755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:07:41.008988 2136755 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:07:41.009061 2136755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:07:41.019249 2136755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 13:07:41.036180 2136755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:07:41.052957 2136755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
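The 2156-byte file just uploaded to /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). On a brand-new cluster a file like this would be consumed through kubeadm's --config flag; the command below is only an illustrative sketch, since this restarted cluster does not re-run init and the exact flags minikube passes are not shown in this log:

	# Illustrative only -- not a command issued in this run
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml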
	I0819 13:07:41.071507 2136755 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I0819 13:07:41.075701 2136755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:07:41.214830 2136755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:07:41.231699 2136755 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949 for IP: 192.168.39.221
	I0819 13:07:41.231740 2136755 certs.go:194] generating shared ca certs ...
	I0819 13:07:41.231772 2136755 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:07:41.231953 2136755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:07:41.232018 2136755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:07:41.232032 2136755 certs.go:256] generating profile certs ...
	I0819 13:07:41.232158 2136755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/client.key
	I0819 13:07:41.232270 2136755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/apiserver.key.bf602192
	I0819 13:07:41.232344 2136755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/proxy-client.key
	I0819 13:07:41.232479 2136755 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:07:41.232509 2136755 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:07:41.232515 2136755 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:07:41.232542 2136755 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:07:41.232566 2136755 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:07:41.232587 2136755 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:07:41.232626 2136755 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:07:41.233253 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:07:41.257846 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:07:41.283661 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:07:41.309906 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:07:41.333789 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 13:07:41.362572 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:07:41.388553 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:07:41.427963 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/pause-698949/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:07:41.507219 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:07:41.559855 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:07:41.617925 2136755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:07:41.731957 2136755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:07:41.782842 2136755 ssh_runner.go:195] Run: openssl version
	I0819 13:07:41.810187 2136755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:07:41.842576 2136755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:07:41.849726 2136755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:07:41.849797 2136755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:07:41.887181 2136755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:07:41.962562 2136755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:07:42.019394 2136755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:07:42.037004 2136755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:07:42.037077 2136755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:07:42.058881 2136755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:07:42.100975 2136755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:07:42.177166 2136755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:07:42.190772 2136755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:07:42.190860 2136755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:07:42.219673 2136755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
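The three blocks above install each certificate (2094193.pem, 20941932.pem, and minikubeCA.pem) under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is the hash-based lookup scheme OpenSSL uses for trusted certificates. The same convention for a single file, as a sketch using the minikubeCA paths from the log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"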
	I0819 13:07:42.286610 2136755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:07:42.300583 2136755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:07:42.325104 2136755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:07:42.338124 2136755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:07:42.351561 2136755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:07:42.364620 2136755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:07:42.375112 2136755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
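Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; all six control-plane and etcd certificates are checked this way before StartCluster proceeds below. An equivalent standalone check, assuming one of the paths from the log (root may be required to read the certs directory):

	if openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	    echo "certificate valid for at least the next 24h"
	else
	    echo "certificate expires within 24h"
	fi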
	I0819 13:07:42.383865 2136755 kubeadm.go:392] StartCluster: {Name:pause-698949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-698949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:07:42.384027 2136755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:07:42.384089 2136755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:07:42.489416 2136755 cri.go:89] found id: "c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e"
	I0819 13:07:42.489446 2136755 cri.go:89] found id: "632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5"
	I0819 13:07:42.489452 2136755 cri.go:89] found id: "4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b"
	I0819 13:07:42.489456 2136755 cri.go:89] found id: "5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2"
	I0819 13:07:42.489460 2136755 cri.go:89] found id: "0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c"
	I0819 13:07:42.489465 2136755 cri.go:89] found id: "df7d37cc66d0b4c32f1cfab67ec3eb6e3e3282f30786379f65c4e627ff57c19f"
	I0819 13:07:42.489482 2136755 cri.go:89] found id: "ce638f8c0034b674607f65311177034a53a30fe828adf63a2feb7f222af4baf1"
	I0819 13:07:42.489487 2136755 cri.go:89] found id: "cfc80624b5ea0ffc04651c51f4b7174122c9bf613c6e1e5af45e991e714d06d9"
	I0819 13:07:42.489491 2136755 cri.go:89] found id: "95d04ede4873d6842bfbe6f702bb74721a2955ed232087723edc20084b6fd4a6"
	I0819 13:07:42.489503 2136755 cri.go:89] found id: ""
	I0819 13:07:42.489563 2136755 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-698949 -n pause-698949
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-698949 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-698949 logs -n 25: (1.567699061s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo journalctl                       | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo docker                           | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo                                  | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo containerd                       | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo find                             | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo crio                             | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-649777                                       | auto-649777   | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	| start   | -p calico-649777 --memory=3072                       | calico-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:08:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:08:10.111591 2138610 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:08:10.111720 2138610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:08:10.111729 2138610 out.go:358] Setting ErrFile to fd 2...
	I0819 13:08:10.111733 2138610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:08:10.111976 2138610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:08:10.112633 2138610 out.go:352] Setting JSON to false
	I0819 13:08:10.114315 2138610 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165040,"bootTime":1723907850,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:08:10.114394 2138610 start.go:139] virtualization: kvm guest
	I0819 13:08:10.116558 2138610 out.go:177] * [calico-649777] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:08:10.117800 2138610 notify.go:220] Checking for updates...
	I0819 13:08:10.117813 2138610 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:08:10.118907 2138610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:08:10.120093 2138610 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:08:10.121304 2138610 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:10.122425 2138610 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:08:10.123469 2138610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:08:10.125067 2138610 config.go:182] Loaded profile config "kindnet-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:10.125185 2138610 config.go:182] Loaded profile config "kubernetes-upgrade-227337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:10.125313 2138610 config.go:182] Loaded profile config "pause-698949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:10.125411 2138610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:08:10.167123 2138610 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 13:08:10.168268 2138610 start.go:297] selected driver: kvm2
	I0819 13:08:10.168291 2138610 start.go:901] validating driver "kvm2" against <nil>
	I0819 13:08:10.168307 2138610 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:08:10.169308 2138610 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:08:10.169414 2138610 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:08:10.187677 2138610 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:08:10.187726 2138610 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 13:08:10.187981 2138610 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:08:10.188047 2138610 cni.go:84] Creating CNI manager for "calico"
	I0819 13:08:10.188059 2138610 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0819 13:08:10.188116 2138610 start.go:340] cluster config:
	{Name:calico-649777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-649777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:08:10.188238 2138610 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:08:10.189872 2138610 out.go:177] * Starting "calico-649777" primary control-plane node in "calico-649777" cluster
	I0819 13:08:10.190921 2138610 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:08:10.190974 2138610 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:08:10.190990 2138610 cache.go:56] Caching tarball of preloaded images
	I0819 13:08:10.191113 2138610 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:08:10.191130 2138610 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:08:10.191267 2138610 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/config.json ...
	I0819 13:08:10.191298 2138610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/config.json: {Name:mk32a9fc0e0828ab4a5e7200d7fdd3233b3538d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:10.191514 2138610 start.go:360] acquireMachinesLock for calico-649777: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:08:10.191575 2138610 start.go:364] duration metric: took 38.723µs to acquireMachinesLock for "calico-649777"
	I0819 13:08:10.191611 2138610 start.go:93] Provisioning new machine with config: &{Name:calico-649777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:calico-649777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:08:10.191701 2138610 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 13:08:10.442830 2137182 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.634800535s)
	I0819 13:08:10.442868 2137182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:08:10.442938 2137182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:08:10.448130 2137182 start.go:563] Will wait 60s for crictl version
	I0819 13:08:10.448208 2137182 ssh_runner.go:195] Run: which crictl
	I0819 13:08:10.452209 2137182 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:08:10.496752 2137182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:08:10.496852 2137182 ssh_runner.go:195] Run: crio --version
	I0819 13:08:10.528228 2137182 ssh_runner.go:195] Run: crio --version
	I0819 13:08:10.564672 2137182 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:08:10.565909 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetIP
	I0819 13:08:10.569220 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:10.569656 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:07:24 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:08:10.569693 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:10.569881 2137182 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:08:10.574545 2137182 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:08:10.574681 2137182 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:08:10.574756 2137182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:08:10.620791 2137182 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:08:10.620818 2137182 crio.go:433] Images already preloaded, skipping extraction
	I0819 13:08:10.620879 2137182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:08:10.672598 2137182 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:08:10.672631 2137182 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:08:10.672642 2137182 kubeadm.go:934] updating node { 192.168.50.233 8443 v1.31.0 crio true true} ...
	I0819 13:08:10.672793 2137182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-227337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:08:10.672884 2137182 ssh_runner.go:195] Run: crio config
	I0819 13:08:10.724535 2137182 cni.go:84] Creating CNI manager for ""
	I0819 13:08:10.724559 2137182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:08:10.724573 2137182 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:08:10.724601 2137182 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.233 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-227337 NodeName:kubernetes-upgrade-227337 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:08:10.724795 2137182 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-227337"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:08:10.724863 2137182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:08:10.735564 2137182 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:08:10.735636 2137182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:08:10.746552 2137182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0819 13:08:10.765518 2137182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:08:10.784833 2137182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:08:10.803538 2137182 ssh_runner.go:195] Run: grep 192.168.50.233	control-plane.minikube.internal$ /etc/hosts
	I0819 13:08:10.807687 2137182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:08:08.852395 2136755 pod_ready.go:103] pod "etcd-pause-698949" in "kube-system" namespace has status "Ready":"False"
	I0819 13:08:10.851421 2136755 pod_ready.go:93] pod "etcd-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:10.851455 2136755 pod_ready.go:82] duration metric: took 9.006607691s for pod "etcd-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:10.851469 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.874034 2136755 pod_ready.go:93] pod "kube-apiserver-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.874067 2136755 pod_ready.go:82] duration metric: took 1.022588858s for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.874083 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.882599 2136755 pod_ready.go:93] pod "kube-controller-manager-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.882633 2136755 pod_ready.go:82] duration metric: took 8.538806ms for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.882650 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.890728 2136755 pod_ready.go:93] pod "kube-proxy-5xzwk" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.890809 2136755 pod_ready.go:82] duration metric: took 8.146995ms for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.890837 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.897696 2136755 pod_ready.go:93] pod "kube-scheduler-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.897718 2136755 pod_ready.go:82] duration metric: took 6.863451ms for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.897728 2136755 pod_ready.go:39] duration metric: took 10.070257878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:08:11.897753 2136755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:08:11.915657 2136755 ops.go:34] apiserver oom_adj: -16
	I0819 13:08:11.915681 2136755 kubeadm.go:597] duration metric: took 29.315460715s to restartPrimaryControlPlane
	I0819 13:08:11.915692 2136755 kubeadm.go:394] duration metric: took 29.531838577s to StartCluster
	I0819 13:08:11.915719 2136755 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:11.915811 2136755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:08:11.917560 2136755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:11.917857 2136755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:08:11.917907 2136755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:08:11.918097 2136755 config.go:182] Loaded profile config "pause-698949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:11.919310 2136755 out.go:177] * Verifying Kubernetes components...
	I0819 13:08:11.920088 2136755 out.go:177] * Enabled addons: 
	I0819 13:08:10.193650 2138610 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 13:08:10.193794 2138610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:10.193841 2138610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:10.210946 2138610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37423
	I0819 13:08:10.211393 2138610 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:10.212026 2138610 main.go:141] libmachine: Using API Version  1
	I0819 13:08:10.212049 2138610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:10.212480 2138610 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:10.212690 2138610 main.go:141] libmachine: (calico-649777) Calling .GetMachineName
	I0819 13:08:10.212849 2138610 main.go:141] libmachine: (calico-649777) Calling .DriverName
	I0819 13:08:10.212997 2138610 start.go:159] libmachine.API.Create for "calico-649777" (driver="kvm2")
	I0819 13:08:10.213028 2138610 client.go:168] LocalClient.Create starting
	I0819 13:08:10.213062 2138610 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 13:08:10.213109 2138610 main.go:141] libmachine: Decoding PEM data...
	I0819 13:08:10.213133 2138610 main.go:141] libmachine: Parsing certificate...
	I0819 13:08:10.213206 2138610 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 13:08:10.213235 2138610 main.go:141] libmachine: Decoding PEM data...
	I0819 13:08:10.213256 2138610 main.go:141] libmachine: Parsing certificate...
	I0819 13:08:10.213288 2138610 main.go:141] libmachine: Running pre-create checks...
	I0819 13:08:10.213298 2138610 main.go:141] libmachine: (calico-649777) Calling .PreCreateCheck
	I0819 13:08:10.213647 2138610 main.go:141] libmachine: (calico-649777) Calling .GetConfigRaw
	I0819 13:08:10.214070 2138610 main.go:141] libmachine: Creating machine...
	I0819 13:08:10.214088 2138610 main.go:141] libmachine: (calico-649777) Calling .Create
	I0819 13:08:10.214233 2138610 main.go:141] libmachine: (calico-649777) Creating KVM machine...
	I0819 13:08:10.215431 2138610 main.go:141] libmachine: (calico-649777) DBG | found existing default KVM network
	I0819 13:08:10.216858 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.216674 2138633 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:3a:57} reservation:<nil>}
	I0819 13:08:10.217633 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.217550 2138633 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:44:aa} reservation:<nil>}
	I0819 13:08:10.218770 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.218661 2138633 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bcf40}
	I0819 13:08:10.218802 2138610 main.go:141] libmachine: (calico-649777) DBG | created network xml: 
	I0819 13:08:10.218816 2138610 main.go:141] libmachine: (calico-649777) DBG | <network>
	I0819 13:08:10.218832 2138610 main.go:141] libmachine: (calico-649777) DBG |   <name>mk-calico-649777</name>
	I0819 13:08:10.218841 2138610 main.go:141] libmachine: (calico-649777) DBG |   <dns enable='no'/>
	I0819 13:08:10.218864 2138610 main.go:141] libmachine: (calico-649777) DBG |   
	I0819 13:08:10.218878 2138610 main.go:141] libmachine: (calico-649777) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 13:08:10.218888 2138610 main.go:141] libmachine: (calico-649777) DBG |     <dhcp>
	I0819 13:08:10.218904 2138610 main.go:141] libmachine: (calico-649777) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 13:08:10.218926 2138610 main.go:141] libmachine: (calico-649777) DBG |     </dhcp>
	I0819 13:08:10.218937 2138610 main.go:141] libmachine: (calico-649777) DBG |   </ip>
	I0819 13:08:10.218943 2138610 main.go:141] libmachine: (calico-649777) DBG |   
	I0819 13:08:10.218953 2138610 main.go:141] libmachine: (calico-649777) DBG | </network>
	I0819 13:08:10.218959 2138610 main.go:141] libmachine: (calico-649777) DBG | 
	I0819 13:08:10.223795 2138610 main.go:141] libmachine: (calico-649777) DBG | trying to create private KVM network mk-calico-649777 192.168.61.0/24...
	I0819 13:08:10.302709 2138610 main.go:141] libmachine: (calico-649777) DBG | private KVM network mk-calico-649777 192.168.61.0/24 created
	I0819 13:08:10.302863 2138610 main.go:141] libmachine: (calico-649777) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777 ...
	I0819 13:08:10.302889 2138610 main.go:141] libmachine: (calico-649777) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 13:08:10.302943 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.302810 2138633 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:10.302972 2138610 main.go:141] libmachine: (calico-649777) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 13:08:10.613581 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.613440 2138633 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/id_rsa...
	I0819 13:08:10.704703 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.704539 2138633 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/calico-649777.rawdisk...
	I0819 13:08:10.705360 2138610 main.go:141] libmachine: (calico-649777) DBG | Writing magic tar header
	I0819 13:08:10.705383 2138610 main.go:141] libmachine: (calico-649777) DBG | Writing SSH key tar header
	I0819 13:08:10.705990 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.705906 2138633 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777 ...
	I0819 13:08:10.706083 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777
	I0819 13:08:10.706110 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 13:08:10.706127 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777 (perms=drwx------)
	I0819 13:08:10.706142 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:10.706157 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 13:08:10.706173 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 13:08:10.706209 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 13:08:10.706224 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 13:08:10.706242 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 13:08:10.706256 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 13:08:10.706271 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins
	I0819 13:08:10.706285 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home
	I0819 13:08:10.706296 2138610 main.go:141] libmachine: (calico-649777) DBG | Skipping /home - not owner
	I0819 13:08:10.706313 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 13:08:10.706326 2138610 main.go:141] libmachine: (calico-649777) Creating domain...
	I0819 13:08:10.707271 2138610 main.go:141] libmachine: (calico-649777) define libvirt domain using xml: 
	I0819 13:08:10.707292 2138610 main.go:141] libmachine: (calico-649777) <domain type='kvm'>
	I0819 13:08:10.707301 2138610 main.go:141] libmachine: (calico-649777)   <name>calico-649777</name>
	I0819 13:08:10.707309 2138610 main.go:141] libmachine: (calico-649777)   <memory unit='MiB'>3072</memory>
	I0819 13:08:10.707318 2138610 main.go:141] libmachine: (calico-649777)   <vcpu>2</vcpu>
	I0819 13:08:10.707331 2138610 main.go:141] libmachine: (calico-649777)   <features>
	I0819 13:08:10.707343 2138610 main.go:141] libmachine: (calico-649777)     <acpi/>
	I0819 13:08:10.707365 2138610 main.go:141] libmachine: (calico-649777)     <apic/>
	I0819 13:08:10.707378 2138610 main.go:141] libmachine: (calico-649777)     <pae/>
	I0819 13:08:10.707384 2138610 main.go:141] libmachine: (calico-649777)     
	I0819 13:08:10.707396 2138610 main.go:141] libmachine: (calico-649777)   </features>
	I0819 13:08:10.707406 2138610 main.go:141] libmachine: (calico-649777)   <cpu mode='host-passthrough'>
	I0819 13:08:10.707413 2138610 main.go:141] libmachine: (calico-649777)   
	I0819 13:08:10.707423 2138610 main.go:141] libmachine: (calico-649777)   </cpu>
	I0819 13:08:10.707431 2138610 main.go:141] libmachine: (calico-649777)   <os>
	I0819 13:08:10.707441 2138610 main.go:141] libmachine: (calico-649777)     <type>hvm</type>
	I0819 13:08:10.707451 2138610 main.go:141] libmachine: (calico-649777)     <boot dev='cdrom'/>
	I0819 13:08:10.707460 2138610 main.go:141] libmachine: (calico-649777)     <boot dev='hd'/>
	I0819 13:08:10.707469 2138610 main.go:141] libmachine: (calico-649777)     <bootmenu enable='no'/>
	I0819 13:08:10.707479 2138610 main.go:141] libmachine: (calico-649777)   </os>
	I0819 13:08:10.707491 2138610 main.go:141] libmachine: (calico-649777)   <devices>
	I0819 13:08:10.707502 2138610 main.go:141] libmachine: (calico-649777)     <disk type='file' device='cdrom'>
	I0819 13:08:10.707520 2138610 main.go:141] libmachine: (calico-649777)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/boot2docker.iso'/>
	I0819 13:08:10.707536 2138610 main.go:141] libmachine: (calico-649777)       <target dev='hdc' bus='scsi'/>
	I0819 13:08:10.707547 2138610 main.go:141] libmachine: (calico-649777)       <readonly/>
	I0819 13:08:10.707557 2138610 main.go:141] libmachine: (calico-649777)     </disk>
	I0819 13:08:10.707566 2138610 main.go:141] libmachine: (calico-649777)     <disk type='file' device='disk'>
	I0819 13:08:10.707586 2138610 main.go:141] libmachine: (calico-649777)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 13:08:10.707602 2138610 main.go:141] libmachine: (calico-649777)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/calico-649777.rawdisk'/>
	I0819 13:08:10.707617 2138610 main.go:141] libmachine: (calico-649777)       <target dev='hda' bus='virtio'/>
	I0819 13:08:10.707629 2138610 main.go:141] libmachine: (calico-649777)     </disk>
	I0819 13:08:10.707638 2138610 main.go:141] libmachine: (calico-649777)     <interface type='network'>
	I0819 13:08:10.707650 2138610 main.go:141] libmachine: (calico-649777)       <source network='mk-calico-649777'/>
	I0819 13:08:10.707668 2138610 main.go:141] libmachine: (calico-649777)       <model type='virtio'/>
	I0819 13:08:10.707678 2138610 main.go:141] libmachine: (calico-649777)     </interface>
	I0819 13:08:10.707685 2138610 main.go:141] libmachine: (calico-649777)     <interface type='network'>
	I0819 13:08:10.707697 2138610 main.go:141] libmachine: (calico-649777)       <source network='default'/>
	I0819 13:08:10.707706 2138610 main.go:141] libmachine: (calico-649777)       <model type='virtio'/>
	I0819 13:08:10.707717 2138610 main.go:141] libmachine: (calico-649777)     </interface>
	I0819 13:08:10.707728 2138610 main.go:141] libmachine: (calico-649777)     <serial type='pty'>
	I0819 13:08:10.707736 2138610 main.go:141] libmachine: (calico-649777)       <target port='0'/>
	I0819 13:08:10.707746 2138610 main.go:141] libmachine: (calico-649777)     </serial>
	I0819 13:08:10.707755 2138610 main.go:141] libmachine: (calico-649777)     <console type='pty'>
	I0819 13:08:10.707766 2138610 main.go:141] libmachine: (calico-649777)       <target type='serial' port='0'/>
	I0819 13:08:10.707774 2138610 main.go:141] libmachine: (calico-649777)     </console>
	I0819 13:08:10.707795 2138610 main.go:141] libmachine: (calico-649777)     <rng model='virtio'>
	I0819 13:08:10.707806 2138610 main.go:141] libmachine: (calico-649777)       <backend model='random'>/dev/random</backend>
	I0819 13:08:10.707815 2138610 main.go:141] libmachine: (calico-649777)     </rng>
	I0819 13:08:10.707823 2138610 main.go:141] libmachine: (calico-649777)     
	I0819 13:08:10.707830 2138610 main.go:141] libmachine: (calico-649777)     
	I0819 13:08:10.707838 2138610 main.go:141] libmachine: (calico-649777)   </devices>
	I0819 13:08:10.707848 2138610 main.go:141] libmachine: (calico-649777) </domain>
	I0819 13:08:10.707857 2138610 main.go:141] libmachine: (calico-649777) 
	I0819 13:08:10.713033 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:8a:a4:66 in network default
	I0819 13:08:10.713733 2138610 main.go:141] libmachine: (calico-649777) Ensuring networks are active...
	I0819 13:08:10.713762 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:10.714545 2138610 main.go:141] libmachine: (calico-649777) Ensuring network default is active
	I0819 13:08:10.714851 2138610 main.go:141] libmachine: (calico-649777) Ensuring network mk-calico-649777 is active
	I0819 13:08:10.715397 2138610 main.go:141] libmachine: (calico-649777) Getting domain xml...
	I0819 13:08:10.716315 2138610 main.go:141] libmachine: (calico-649777) Creating domain...
	I0819 13:08:12.097802 2138610 main.go:141] libmachine: (calico-649777) Waiting to get IP...
	I0819 13:08:12.098806 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:12.099358 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:12.099389 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:12.099330 2138633 retry.go:31] will retry after 274.81844ms: waiting for machine to come up
	I0819 13:08:12.375737 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:12.376449 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:12.376484 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:12.376394 2138633 retry.go:31] will retry after 375.15178ms: waiting for machine to come up
	I0819 13:08:12.753094 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:12.753672 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:12.753715 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:12.753573 2138633 retry.go:31] will retry after 379.742892ms: waiting for machine to come up
	I0819 13:08:13.135380 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:13.136096 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:13.136131 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:13.136026 2138633 retry.go:31] will retry after 474.902187ms: waiting for machine to come up
	I0819 13:08:13.613049 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:13.613588 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:13.613625 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:13.613558 2138633 retry.go:31] will retry after 484.923411ms: waiting for machine to come up
	I0819 13:08:14.100379 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:14.100891 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:14.100926 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:14.100851 2138633 retry.go:31] will retry after 794.279866ms: waiting for machine to come up
	I0819 13:08:14.897096 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:14.897656 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:14.897687 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:14.897608 2138633 retry.go:31] will retry after 798.828197ms: waiting for machine to come up
	I0819 13:08:11.920712 2136755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:08:11.921239 2136755 addons.go:510] duration metric: took 3.33491ms for enable addons: enabled=[]
	I0819 13:08:12.103673 2136755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:08:12.124223 2136755 node_ready.go:35] waiting up to 6m0s for node "pause-698949" to be "Ready" ...
	I0819 13:08:12.128196 2136755 node_ready.go:49] node "pause-698949" has status "Ready":"True"
	I0819 13:08:12.128218 2136755 node_ready.go:38] duration metric: took 3.947723ms for node "pause-698949" to be "Ready" ...
	I0819 13:08:12.128231 2136755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:08:12.133920 2136755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5qqtm" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.453720 2136755 pod_ready.go:93] pod "coredns-6f6b679f8f-5qqtm" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:12.453745 2136755 pod_ready.go:82] duration metric: took 319.795906ms for pod "coredns-6f6b679f8f-5qqtm" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.453756 2136755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.850313 2136755 pod_ready.go:93] pod "etcd-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:12.850342 2136755 pod_ready.go:82] duration metric: took 396.578671ms for pod "etcd-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.850355 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.248787 2136755 pod_ready.go:93] pod "kube-apiserver-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:13.248827 2136755 pod_ready.go:82] duration metric: took 398.453288ms for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.248851 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.649613 2136755 pod_ready.go:93] pod "kube-controller-manager-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:13.649641 2136755 pod_ready.go:82] duration metric: took 400.782186ms for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.649652 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.049276 2136755 pod_ready.go:93] pod "kube-proxy-5xzwk" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:14.049305 2136755 pod_ready.go:82] duration metric: took 399.645666ms for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.049314 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.448722 2136755 pod_ready.go:93] pod "kube-scheduler-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:14.448747 2136755 pod_ready.go:82] duration metric: took 399.426684ms for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.448756 2136755 pod_ready.go:39] duration metric: took 2.320511819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:08:14.448770 2136755 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:08:14.448831 2136755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:08:14.463230 2136755 api_server.go:72] duration metric: took 2.545327543s to wait for apiserver process to appear ...
	I0819 13:08:14.463255 2136755 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:08:14.463272 2136755 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0819 13:08:14.468845 2136755 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I0819 13:08:14.469872 2136755 api_server.go:141] control plane version: v1.31.0
	I0819 13:08:14.469898 2136755 api_server.go:131] duration metric: took 6.635694ms to wait for apiserver health ...
	I0819 13:08:14.469908 2136755 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:08:14.652536 2136755 system_pods.go:59] 6 kube-system pods found
	I0819 13:08:14.652570 2136755 system_pods.go:61] "coredns-6f6b679f8f-5qqtm" [5797c960-9b94-4164-90c5-f0b89084c23a] Running
	I0819 13:08:14.652578 2136755 system_pods.go:61] "etcd-pause-698949" [42e6e52d-9c13-4152-9fd7-b745b28078d9] Running
	I0819 13:08:14.652583 2136755 system_pods.go:61] "kube-apiserver-pause-698949" [8f1c7b7b-3cf2-461a-87b3-5c4a2b865f21] Running
	I0819 13:08:14.652588 2136755 system_pods.go:61] "kube-controller-manager-pause-698949" [e997a844-c93f-43bc-9c58-f6782c0df3a2] Running
	I0819 13:08:14.652633 2136755 system_pods.go:61] "kube-proxy-5xzwk" [5b6aacd5-988c-4114-b95d-768961297985] Running
	I0819 13:08:14.652644 2136755 system_pods.go:61] "kube-scheduler-pause-698949" [37bc080f-8d60-45ca-b3e1-05ce1acd85d6] Running
	I0819 13:08:14.652659 2136755 system_pods.go:74] duration metric: took 182.736839ms to wait for pod list to return data ...
	I0819 13:08:14.652672 2136755 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:08:14.849462 2136755 default_sa.go:45] found service account: "default"
	I0819 13:08:14.849498 2136755 default_sa.go:55] duration metric: took 196.813519ms for default service account to be created ...
	I0819 13:08:14.849511 2136755 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:08:15.053291 2136755 system_pods.go:86] 6 kube-system pods found
	I0819 13:08:15.053322 2136755 system_pods.go:89] "coredns-6f6b679f8f-5qqtm" [5797c960-9b94-4164-90c5-f0b89084c23a] Running
	I0819 13:08:15.053330 2136755 system_pods.go:89] "etcd-pause-698949" [42e6e52d-9c13-4152-9fd7-b745b28078d9] Running
	I0819 13:08:15.053336 2136755 system_pods.go:89] "kube-apiserver-pause-698949" [8f1c7b7b-3cf2-461a-87b3-5c4a2b865f21] Running
	I0819 13:08:15.053341 2136755 system_pods.go:89] "kube-controller-manager-pause-698949" [e997a844-c93f-43bc-9c58-f6782c0df3a2] Running
	I0819 13:08:15.053346 2136755 system_pods.go:89] "kube-proxy-5xzwk" [5b6aacd5-988c-4114-b95d-768961297985] Running
	I0819 13:08:15.053351 2136755 system_pods.go:89] "kube-scheduler-pause-698949" [37bc080f-8d60-45ca-b3e1-05ce1acd85d6] Running
	I0819 13:08:15.053361 2136755 system_pods.go:126] duration metric: took 203.843398ms to wait for k8s-apps to be running ...
	I0819 13:08:15.053380 2136755 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:08:15.053449 2136755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:08:15.081710 2136755 system_svc.go:56] duration metric: took 28.317442ms WaitForService to wait for kubelet
	I0819 13:08:15.081759 2136755 kubeadm.go:582] duration metric: took 3.163860746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:08:15.081783 2136755 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:08:15.250321 2136755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:08:15.250354 2136755 node_conditions.go:123] node cpu capacity is 2
	I0819 13:08:15.250370 2136755 node_conditions.go:105] duration metric: took 168.579956ms to run NodePressure ...
	I0819 13:08:15.250385 2136755 start.go:241] waiting for startup goroutines ...
	I0819 13:08:15.250395 2136755 start.go:246] waiting for cluster config update ...
	I0819 13:08:15.250405 2136755 start.go:255] writing updated cluster config ...
	I0819 13:08:15.250800 2136755 ssh_runner.go:195] Run: rm -f paused
	I0819 13:08:15.308035 2136755 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:08:15.310037 2136755 out.go:177] * Done! kubectl is now configured to use "pause-698949" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.189299305Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8be97c64-0c89-4d8b-9dee-a7220ca13e58 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.189437421Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1724072876601924655,StartedAt:1724072876710415401,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/abf3663a182e203a855f90d849323b4b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/abf3663a182e203a855f90d849323b4b/containers/kube-apiserver/f805a83f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/v
ar/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-pause-698949_abf3663a182e203a855f90d849323b4b/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8be97c64-0c89-4d8b-9dee-a7220ca13e58 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.190037603Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=18778e82-4512-4d19-bd9c-348c86059dec name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.190167392Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724072862881862268,StartedAt:1724072863057129055,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5b6aacd5-988c-4114-b95d-768961297985/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5b6aacd5-988c-4114-b95d-768961297985/containers/kube-proxy/89989852,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/k
ubelet/pods/5b6aacd5-988c-4114-b95d-768961297985/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5b6aacd5-988c-4114-b95d-768961297985/volumes/kubernetes.io~projected/kube-api-access-zcqvb,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-5xzwk_5b6aacd5-988c-4114-b95d-768961297985/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collec
tor/interceptors.go:74" id=18778e82-4512-4d19-bd9c-348c86059dec name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.193210761Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=22d2fd35-2512-4ffb-b406-ced906fddc23 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.193367770Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724072862856998030,StartedAt:1724072862892812679,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/5797c960-9b94-4164-90c5-f0b89084c23a/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5797c960-9b94-4164-90c5-f0b89084c23a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5797c960-9b94-4164-90c5-f0b89084c23a/containers/coredns/db66866a,Readonly:false,SelinuxRelabel:false,Propagation:PRO
PAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5797c960-9b94-4164-90c5-f0b89084c23a/volumes/kubernetes.io~projected/kube-api-access-l9sgh,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-6f6b679f8f-5qqtm_5797c960-9b94-4164-90c5-f0b89084c23a/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:965,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=22d2fd35-2512-4ffb-b406-ced906fddc23 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.242351512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfccdb2a-71e8-408f-8326-2887bbdf8cfa name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.242450991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfccdb2a-71e8-408f-8326-2887bbdf8cfa name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.244146298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2998e5e0-c929-4601-8dcb-7d139c612bab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.244834828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072896244800828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2998e5e0-c929-4601-8dcb-7d139c612bab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.245637835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3db3c56b-3fac-4b1d-bb52-7c842f0421f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.245917643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3db3c56b-3fac-4b1d-bb52-7c842f0421f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.246309737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3db3c56b-3fac-4b1d-bb52-7c842f0421f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.292303352Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5fdb445a-661c-4876-907a-a5a448bf32a1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.292631808Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5qqtm,Uid:5797c960-9b94-4164-90c5-f0b89084c23a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861830463673,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:06:43.505906034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-698949,Uid:abf3663a182e203a855f90d849323b4b,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861526108378,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.221:8443,kubernetes.io/config.hash: abf3663a182e203a855f90d849323b4b,kubernetes.io/config.seen: 2024-08-19T13:06:38.065571738Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&PodSandboxMetadata{Name:kube-proxy-5xzwk,Uid:5b6aacd5-988c-4114-b95d-768961297985,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861490397909,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:06:43.347154164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&PodSandboxMetadata{Name:etcd-pause-698949,Uid:ad5a8791af5b937c21ed3014b6a39af8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861475072336,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.221:2379,kubernetes.io/config.hash: ad5a8791af5b937c21ed3014b6a39af8,kubernetes.io/config.seen: 2024-08-19T13:06:38.065567489Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-698949,Uid:339d7b0c589af675a2204deded697e0a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861457390366,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 339d7b0c589af675a2204deded697e0a,kubernetes.io/config.seen: 2024-08-19T13:06:38.065573216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-698949,Uid:23a78573af93162d451bb71cb6cb100b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861437639153,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 23a78573af93162d451bb71cb6cb100b,kubernetes.io/config.seen: 2024-08-19T13:06:38.065573995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&PodSandboxMetadata{Name:kube-proxy-5xzwk,Uid:5b6aacd5-988c-4114-b95d-768961297985,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724072805161724876,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-19T13:06:43.347154164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5qqtm,Uid:5797c960-9b94-4164-90c5-f0b89084c23a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724072804430088260,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:06:43.505906034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5fdb445a-661c-4876-907a-a5a448bf32a1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.297494095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=311f4591-c84b-445e-984a-c7d1835ecd54 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.297742482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=311f4591-c84b-445e-984a-c7d1835ecd54 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.298591923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=311f4591-c84b-445e-984a-c7d1835ecd54 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.304956447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ece3fd30-3c09-4921-8f43-38ebdaad4b78 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.305181734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ece3fd30-3c09-4921-8f43-38ebdaad4b78 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.307074926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30b2ecf0-780f-4e24-9ffb-d26dce7cfa78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.307781380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072896307753438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30b2ecf0-780f-4e24-9ffb-d26dce7cfa78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.308654590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f521f5df-eec6-4e7b-8fbe-0a2fafcc3d51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.308982273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f521f5df-eec6-4e7b-8fbe-0a2fafcc3d51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:16 pause-698949 crio[2119]: time="2024-08-19 13:08:16.309744683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f521f5df-eec6-4e7b-8fbe-0a2fafcc3d51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	52ec1fdc7cc5f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   19 seconds ago       Running             etcd                      2                   3a7e92f426707       etcd-pause-698949
	2b030d2722cd7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   19 seconds ago       Running             kube-controller-manager   2                   7ef547908afeb       kube-controller-manager-pause-698949
	1eee737a34c7c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   19 seconds ago       Running             kube-scheduler            2                   123519786b997       kube-scheduler-pause-698949
	d8437ecd909ca       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 seconds ago       Running             kube-apiserver            2                   4441563408cc2       kube-apiserver-pause-698949
	7065181391fb9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   33 seconds ago       Running             coredns                   1                   b9e8a7f95c456       coredns-6f6b679f8f-5qqtm
	c29a695a772bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   34 seconds ago       Exited              etcd                      1                   3a7e92f426707       etcd-pause-698949
	d53a3557493dc       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   34 seconds ago       Running             kube-proxy                1                   a1728486cc7e0       kube-proxy-5xzwk
	c2fa86322f82d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   34 seconds ago       Exited              kube-apiserver            1                   4441563408cc2       kube-apiserver-pause-698949
	632b98832078d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   34 seconds ago       Exited              kube-controller-manager   1                   7ef547908afeb       kube-controller-manager-pause-698949
	4609c6d508aea       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   34 seconds ago       Exited              kube-scheduler            1                   123519786b997       kube-scheduler-pause-698949
	5655595347a7b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   92819c2c42c09       kube-proxy-5xzwk
	0a1eb2b7e8365       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   fc6a22e9edf3f       coredns-6f6b679f8f-5qqtm
	
	
	==> coredns [0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1200762963]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:06:44.819) (total time: 30003ms):
	Trace[1200762963]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:07:14.822)
	Trace[1200762963]: [30.003744548s] [30.003744548s] END
	[INFO] plugin/kubernetes: Trace[613852929]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:06:44.819) (total time: 30003ms):
	Trace[613852929]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:07:14.822)
	Trace[613852929]: [30.003889183s] [30.003889183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1386248915]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:06:44.821) (total time: 30002ms):
	Trace[1386248915]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (13:07:14.822)
	Trace[1386248915]: [30.002172396s] [30.002172396s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43131 - 2465 "HINFO IN 4822256440033380775.1684270965545853753. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013097064s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49027 - 24886 "HINFO IN 2718717822102568054.4782735869707459045. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009289096s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[914264663]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:07:43.131) (total time: 10884ms):
	Trace[914264663]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55076->10.96.0.1:443: read: connection reset by peer 10884ms (13:07:54.016)
	Trace[914264663]: [10.884575119s] [10.884575119s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55060->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[392737611]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:07:43.131) (total time: 10885ms):
	Trace[392737611]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55060->10.96.0.1:443: read: connection reset by peer 10885ms (13:07:54.016)
	Trace[392737611]: [10.885789368s] [10.885789368s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55060->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55088->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1948212645]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:07:43.131) (total time: 10887ms):
	Trace[1948212645]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55088->10.96.0.1:443: read: connection reset by peer 10887ms (13:07:54.019)
	Trace[1948212645]: [10.887729515s] [10.887729515s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55088->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-698949
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-698949
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=pause-698949
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_06_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:06:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-698949
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:08:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    pause-698949
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eefdb61dd00488ab1292822794d10e5
	  System UUID:                9eefdb61-dd00-488a-b129-2822794d10e5
	  Boot ID:                    0fd418ca-ba1b-4f09-9d18-cad073bf18f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5qqtm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-698949                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-698949             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-698949    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-5xzwk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-698949             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node pause-698949 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node pause-698949 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node pause-698949 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node pause-698949 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  98s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node pause-698949 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node pause-698949 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeReady                97s                  kubelet          Node pause-698949 status is now: NodeReady
	  Normal  RegisteredNode           94s                  node-controller  Node pause-698949 event: Registered Node pause-698949 in Controller
	  Normal  Starting                 20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node pause-698949 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node pause-698949 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node pause-698949 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                  node-controller  Node pause-698949 event: Registered Node pause-698949 in Controller
	
	
	==> dmesg <==
	[  +9.088019] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.064102] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078572] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.221212] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.151516] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.326355] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.405437] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.060030] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.088669] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.063241] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.013380] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.073821] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.302705] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.117095] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 13:07] kauditd_printk_skb: 67 callbacks suppressed
	[ +14.148963] systemd-fstab-generator[2038]: Ignoring "noauto" option for root device
	[  +0.150255] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +0.171012] systemd-fstab-generator[2064]: Ignoring "noauto" option for root device
	[  +0.139210] systemd-fstab-generator[2076]: Ignoring "noauto" option for root device
	[  +0.298919] systemd-fstab-generator[2104]: Ignoring "noauto" option for root device
	[  +1.245187] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[ +12.536729] kauditd_printk_skb: 196 callbacks suppressed
	[  +2.074690] systemd-fstab-generator[2974]: Ignoring "noauto" option for root device
	[  +4.101960] kauditd_printk_skb: 39 callbacks suppressed
	[Aug19 13:08] systemd-fstab-generator[3328]: Ignoring "noauto" option for root device
	
	
	==> etcd [52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad] <==
	{"level":"info","ts":"2024-08-19T13:07:57.179975Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c75d0b2482cd9027","local-member-id":"e7b0d5fc33cf92f8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:57.180019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:57.178999Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T13:07:57.187047Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:07:57.190068Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e7b0d5fc33cf92f8","initial-advertise-peer-urls":["https://192.168.39.221:2380"],"listen-peer-urls":["https://192.168.39.221:2380"],"advertise-client-urls":["https://192.168.39.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:07:57.190195Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:07:57.187073Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.221:2380"}
	{"level":"info","ts":"2024-08-19T13:07:57.190258Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.221:2380"}
	{"level":"info","ts":"2024-08-19T13:07:58.213314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:58.213436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:58.213484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 received MsgPreVoteResp from e7b0d5fc33cf92f8 at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:58.213527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.213552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 received MsgVoteResp from e7b0d5fc33cf92f8 at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.213578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.213603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e7b0d5fc33cf92f8 elected leader e7b0d5fc33cf92f8 at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.218863Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e7b0d5fc33cf92f8","local-member-attributes":"{Name:pause-698949 ClientURLs:[https://192.168.39.221:2379]}","request-path":"/0/members/e7b0d5fc33cf92f8/attributes","cluster-id":"c75d0b2482cd9027","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:07:58.219410Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:07:58.221927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:07:58.222767Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:07:58.222819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:07:58.224247Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:07:58.225397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:07:58.226094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.221:2379"}
	{"level":"info","ts":"2024-08-19T13:07:58.226796Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:08:01.491274Z","caller":"traceutil/trace.go:171","msg":"trace[1398600292] transaction","detail":"{read_only:false; number_of_response:0; response_revision:477; }","duration":"127.850014ms","start":"2024-08-19T13:08:01.363409Z","end":"2024-08-19T13:08:01.491259Z","steps":["trace[1398600292] 'process raft request'  (duration: 127.643037ms)"],"step_count":1}
	
	
	==> etcd [c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986] <==
	{"level":"warn","ts":"2024-08-19T13:07:42.801144Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T13:07:42.802748Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.221:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.221:2380","--initial-cluster=pause-698949=https://192.168.39.221:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.221:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.221:2380","--name=pause-698949","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-08-19T13:07:42.802917Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-08-19T13:07:42.804788Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T13:07:42.804853Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.221:2380"]}
	{"level":"info","ts":"2024-08-19T13:07:42.804914Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:07:42.807786Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"]}
	{"level":"info","ts":"2024-08-19T13:07:42.817020Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-698949","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.221:2380"],"listen-peer-urls":["https://192.168.39.221:2380"],"advertise-client-urls":["https://192.168.39.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> kernel <==
	 13:08:16 up 2 min,  0 users,  load average: 0.40, 0.21, 0.08
	Linux pause-698949 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e] <==
	I0819 13:07:42.366074       1 options.go:228] external host was not specified, using 192.168.39.221
	I0819 13:07:42.371900       1 server.go:142] Version: v1.31.0
	I0819 13:07:42.371992       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:43.401870       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0819 13:07:43.405888       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:43.406030       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 13:07:43.418509       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:07:43.443654       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 13:07:43.444790       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 13:07:43.445136       1 instance.go:232] Using reconciler: lease
	W0819 13:07:43.448400       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:44.407053       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:44.407134       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:44.449491       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:45.825072       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:46.048879       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:46.141563       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:48.281141       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:48.422959       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:48.823140       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:51.922778       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:52.737363       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1] <==
	I0819 13:07:59.538068       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 13:07:59.537962       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 13:07:59.541302       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 13:07:59.541633       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 13:07:59.543430       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 13:07:59.546582       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0819 13:07:59.554841       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 13:07:59.555510       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 13:07:59.563333       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:07:59.563386       1 policy_source.go:224] refreshing policies
	I0819 13:07:59.598948       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 13:07:59.601365       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 13:07:59.601513       1 aggregator.go:171] initial CRD sync complete...
	I0819 13:07:59.601531       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 13:07:59.601537       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 13:07:59.601542       1 cache.go:39] Caches are synced for autoregister controller
	I0819 13:07:59.617234       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 13:08:00.441460       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 13:08:01.554260       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 13:08:01.605993       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 13:08:01.679189       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 13:08:01.756357       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 13:08:01.772481       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 13:08:02.872273       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 13:08:02.921177       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e] <==
	I0819 13:08:02.818632       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0819 13:08:02.821052       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 13:08:02.826183       1 shared_informer.go:320] Caches are synced for node
	I0819 13:08:02.826388       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0819 13:08:02.826545       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0819 13:08:02.826621       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0819 13:08:02.826645       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0819 13:08:02.826800       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 13:08:02.826936       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-698949"
	I0819 13:08:02.828590       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 13:08:02.828854       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="187.678µs"
	I0819 13:08:02.834533       1 shared_informer.go:320] Caches are synced for deployment
	I0819 13:08:02.899077       1 shared_informer.go:320] Caches are synced for taint
	I0819 13:08:02.900046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 13:08:02.900974       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-698949"
	I0819 13:08:02.901035       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 13:08:02.902422       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 13:08:02.969989       1 shared_informer.go:320] Caches are synced for disruption
	I0819 13:08:02.978420       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0819 13:08:03.020283       1 shared_informer.go:320] Caches are synced for crt configmap
	I0819 13:08:03.024250       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:08:03.031933       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:08:03.471042       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:08:03.520347       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:08:03.520447       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5] <==
	I0819 13:07:43.507561       1 serving.go:386] Generated self-signed cert in-memory
	I0819 13:07:44.034386       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 13:07:44.035774       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:44.037560       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 13:07:44.040194       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 13:07:44.040359       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:07:44.040441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:06:45.465556       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:06:45.474618       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.221"]
	E0819 13:06:45.474830       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:06:45.510023       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:06:45.510066       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:06:45.510092       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:06:45.512602       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:06:45.512971       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:06:45.512995       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:06:45.514259       1 config.go:197] "Starting service config controller"
	I0819 13:06:45.514301       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:06:45.514322       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:06:45.514342       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:06:45.514920       1 config.go:326] "Starting node config controller"
	I0819 13:06:45.515085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:06:45.614418       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:06:45.614561       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:06:45.615236       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9] <==
	 >
	E0819 13:07:43.657130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:07:54.018503       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-698949\": dial tcp 192.168.39.221:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.221:54700->192.168.39.221:8443: read: connection reset by peer"
	E0819 13:07:55.073337       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-698949\": dial tcp 192.168.39.221:8443: connect: connection refused"
	I0819 13:07:59.548199       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.221"]
	E0819 13:07:59.548479       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:07:59.619150       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:07:59.619232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:07:59.619263       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:07:59.622416       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:07:59.622763       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:07:59.623397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:59.625496       1 config.go:197] "Starting service config controller"
	I0819 13:07:59.625961       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:07:59.625995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:07:59.626350       1 config.go:326] "Starting node config controller"
	I0819 13:07:59.626379       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:07:59.626641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:07:59.727306       1 shared_informer.go:320] Caches are synced for node config
	I0819 13:07:59.727324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:07:59.728613       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9] <==
	I0819 13:07:57.695132       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:07:59.508900       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:07:59.508998       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:07:59.509009       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:07:59.509015       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:07:59.559105       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:07:59.559239       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:59.565066       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:07:59.565188       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:07:59.565224       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:07:59.565250       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:07:59.666178       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b] <==
	I0819 13:07:43.687146       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:07:54.016773       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.39.221:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.221:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.221:54722->192.168.39.221:8443: read: connection reset by peer
	W0819 13:07:54.016920       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:07:54.016952       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:07:54.030557       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:07:54.030620       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0819 13:07:54.030655       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0819 13:07:54.033408       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:07:54.033469       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 13:07:54.033510       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0819 13:07:54.034005       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0819 13:07:54.034070       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0819 13:07:54.034116       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.271043    2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/339d7b0c589af675a2204deded697e0a-k8s-certs\") pod \"kube-controller-manager-pause-698949\" (UID: \"339d7b0c589af675a2204deded697e0a\") " pod="kube-system/kube-controller-manager-pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.278411    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.279231    2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.221:8443: connect: connection refused" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.481355    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.482255    2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.221:8443: connect: connection refused" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.486578    2981 scope.go:117] "RemoveContainer" containerID="c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.488514    2981 scope.go:117] "RemoveContainer" containerID="4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.492142    2981 scope.go:117] "RemoveContainer" containerID="632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.496887    2981 scope.go:117] "RemoveContainer" containerID="c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.661285    2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-698949?timeout=10s\": dial tcp 192.168.39.221:8443: connect: connection refused" interval="800ms"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.884588    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.885914    2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.221:8443: connect: connection refused" node="pause-698949"
	Aug 19 13:07:57 pause-698949 kubelet[2981]: I0819 13:07:57.687231    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.589226    2981 kubelet_node_status.go:111] "Node was previously registered" node="pause-698949"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.589768    2981 kubelet_node_status.go:75] "Successfully registered node" node="pause-698949"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.589941    2981 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.591759    2981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.014648    2981 apiserver.go:52] "Watching apiserver"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.053804    2981 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.116373    2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6aacd5-988c-4114-b95d-768961297985-lib-modules\") pod \"kube-proxy-5xzwk\" (UID: \"5b6aacd5-988c-4114-b95d-768961297985\") " pod="kube-system/kube-proxy-5xzwk"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.117044    2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6aacd5-988c-4114-b95d-768961297985-xtables-lock\") pod \"kube-proxy-5xzwk\" (UID: \"5b6aacd5-988c-4114-b95d-768961297985\") " pod="kube-system/kube-proxy-5xzwk"
	Aug 19 13:08:06 pause-698949 kubelet[2981]: E0819 13:08:06.183770    2981 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072886183242433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:08:06 pause-698949 kubelet[2981]: E0819 13:08:06.183793    2981 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072886183242433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:08:16 pause-698949 kubelet[2981]: E0819 13:08:16.185986    2981 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072896185174740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:08:16 pause-698949 kubelet[2981]: E0819 13:08:16.186010    2981 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072896185174740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-698949 -n pause-698949
helpers_test.go:261: (dbg) Run:  kubectl --context pause-698949 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-698949 -n pause-698949
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-698949 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-698949 logs -n 25: (1.453540816s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo docker                           | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo                                  | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo cat                              | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo containerd                       | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo systemctl                        | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo find                             | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-649777 sudo crio                             | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-649777                                       | auto-649777    | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	| start   | -p calico-649777 --memory=3072                       | calico-649777  | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/nsswitch.conf                                   |                |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo cat                           | kindnet-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo crictl                        | kindnet-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo crictl                        | kindnet-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC | 19 Aug 24 13:08 UTC |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-649777 sudo find                          | kindnet-649777 | jenkins | v1.33.1 | 19 Aug 24 13:08 UTC |                     |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:08:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:08:10.111591 2138610 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:08:10.111720 2138610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:08:10.111729 2138610 out.go:358] Setting ErrFile to fd 2...
	I0819 13:08:10.111733 2138610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:08:10.111976 2138610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:08:10.112633 2138610 out.go:352] Setting JSON to false
	I0819 13:08:10.114315 2138610 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165040,"bootTime":1723907850,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:08:10.114394 2138610 start.go:139] virtualization: kvm guest
	I0819 13:08:10.116558 2138610 out.go:177] * [calico-649777] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:08:10.117800 2138610 notify.go:220] Checking for updates...
	I0819 13:08:10.117813 2138610 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:08:10.118907 2138610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:08:10.120093 2138610 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:08:10.121304 2138610 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:10.122425 2138610 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:08:10.123469 2138610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:08:10.125067 2138610 config.go:182] Loaded profile config "kindnet-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:10.125185 2138610 config.go:182] Loaded profile config "kubernetes-upgrade-227337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:10.125313 2138610 config.go:182] Loaded profile config "pause-698949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:10.125411 2138610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:08:10.167123 2138610 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 13:08:10.168268 2138610 start.go:297] selected driver: kvm2
	I0819 13:08:10.168291 2138610 start.go:901] validating driver "kvm2" against <nil>
	I0819 13:08:10.168307 2138610 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:08:10.169308 2138610 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:08:10.169414 2138610 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:08:10.187677 2138610 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:08:10.187726 2138610 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 13:08:10.187981 2138610 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:08:10.188047 2138610 cni.go:84] Creating CNI manager for "calico"
	I0819 13:08:10.188059 2138610 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0819 13:08:10.188116 2138610 start.go:340] cluster config:
	{Name:calico-649777 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-649777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:08:10.188238 2138610 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:08:10.189872 2138610 out.go:177] * Starting "calico-649777" primary control-plane node in "calico-649777" cluster
	I0819 13:08:10.190921 2138610 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:08:10.190974 2138610 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:08:10.190990 2138610 cache.go:56] Caching tarball of preloaded images
	I0819 13:08:10.191113 2138610 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:08:10.191130 2138610 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:08:10.191267 2138610 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/config.json ...
	I0819 13:08:10.191298 2138610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/config.json: {Name:mk32a9fc0e0828ab4a5e7200d7fdd3233b3538d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:10.191514 2138610 start.go:360] acquireMachinesLock for calico-649777: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:08:10.191575 2138610 start.go:364] duration metric: took 38.723µs to acquireMachinesLock for "calico-649777"
	I0819 13:08:10.191611 2138610 start.go:93] Provisioning new machine with config: &{Name:calico-649777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-649777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:08:10.191701 2138610 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 13:08:10.442830 2137182 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.634800535s)
	I0819 13:08:10.442868 2137182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:08:10.442938 2137182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:08:10.448130 2137182 start.go:563] Will wait 60s for crictl version
	I0819 13:08:10.448208 2137182 ssh_runner.go:195] Run: which crictl
	I0819 13:08:10.452209 2137182 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:08:10.496752 2137182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:08:10.496852 2137182 ssh_runner.go:195] Run: crio --version
	I0819 13:08:10.528228 2137182 ssh_runner.go:195] Run: crio --version
	I0819 13:08:10.564672 2137182 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:08:10.565909 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) Calling .GetIP
	I0819 13:08:10.569220 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:10.569656 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:7e:a0", ip: ""} in network mk-kubernetes-upgrade-227337: {Iface:virbr2 ExpiryTime:2024-08-19 14:07:24 +0000 UTC Type:0 Mac:52:54:00:3f:7e:a0 Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:kubernetes-upgrade-227337 Clientid:01:52:54:00:3f:7e:a0}
	I0819 13:08:10.569693 2137182 main.go:141] libmachine: (kubernetes-upgrade-227337) DBG | domain kubernetes-upgrade-227337 has defined IP address 192.168.50.233 and MAC address 52:54:00:3f:7e:a0 in network mk-kubernetes-upgrade-227337
	I0819 13:08:10.569881 2137182 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:08:10.574545 2137182 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:08:10.574681 2137182 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:08:10.574756 2137182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:08:10.620791 2137182 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:08:10.620818 2137182 crio.go:433] Images already preloaded, skipping extraction
	I0819 13:08:10.620879 2137182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:08:10.672598 2137182 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:08:10.672631 2137182 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:08:10.672642 2137182 kubeadm.go:934] updating node { 192.168.50.233 8443 v1.31.0 crio true true} ...
	I0819 13:08:10.672793 2137182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-227337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:08:10.672884 2137182 ssh_runner.go:195] Run: crio config
	I0819 13:08:10.724535 2137182 cni.go:84] Creating CNI manager for ""
	I0819 13:08:10.724559 2137182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:08:10.724573 2137182 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:08:10.724601 2137182 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.233 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-227337 NodeName:kubernetes-upgrade-227337 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:08:10.724795 2137182 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-227337"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:08:10.724863 2137182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:08:10.735564 2137182 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:08:10.735636 2137182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:08:10.746552 2137182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0819 13:08:10.765518 2137182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:08:10.784833 2137182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:08:10.803538 2137182 ssh_runner.go:195] Run: grep 192.168.50.233	control-plane.minikube.internal$ /etc/hosts
	I0819 13:08:10.807687 2137182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:08:08.852395 2136755 pod_ready.go:103] pod "etcd-pause-698949" in "kube-system" namespace has status "Ready":"False"
	I0819 13:08:10.851421 2136755 pod_ready.go:93] pod "etcd-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:10.851455 2136755 pod_ready.go:82] duration metric: took 9.006607691s for pod "etcd-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:10.851469 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.874034 2136755 pod_ready.go:93] pod "kube-apiserver-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.874067 2136755 pod_ready.go:82] duration metric: took 1.022588858s for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.874083 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.882599 2136755 pod_ready.go:93] pod "kube-controller-manager-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.882633 2136755 pod_ready.go:82] duration metric: took 8.538806ms for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.882650 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.890728 2136755 pod_ready.go:93] pod "kube-proxy-5xzwk" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.890809 2136755 pod_ready.go:82] duration metric: took 8.146995ms for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.890837 2136755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.897696 2136755 pod_ready.go:93] pod "kube-scheduler-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:11.897718 2136755 pod_ready.go:82] duration metric: took 6.863451ms for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:11.897728 2136755 pod_ready.go:39] duration metric: took 10.070257878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:08:11.897753 2136755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:08:11.915657 2136755 ops.go:34] apiserver oom_adj: -16
	I0819 13:08:11.915681 2136755 kubeadm.go:597] duration metric: took 29.315460715s to restartPrimaryControlPlane
	I0819 13:08:11.915692 2136755 kubeadm.go:394] duration metric: took 29.531838577s to StartCluster
	I0819 13:08:11.915719 2136755 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:11.915811 2136755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:08:11.917560 2136755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:11.917857 2136755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:08:11.917907 2136755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:08:11.918097 2136755 config.go:182] Loaded profile config "pause-698949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:08:11.919310 2136755 out.go:177] * Verifying Kubernetes components...
	I0819 13:08:11.920088 2136755 out.go:177] * Enabled addons: 
	I0819 13:08:10.193650 2138610 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 13:08:10.193794 2138610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:08:10.193841 2138610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:08:10.210946 2138610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37423
	I0819 13:08:10.211393 2138610 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:08:10.212026 2138610 main.go:141] libmachine: Using API Version  1
	I0819 13:08:10.212049 2138610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:08:10.212480 2138610 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:08:10.212690 2138610 main.go:141] libmachine: (calico-649777) Calling .GetMachineName
	I0819 13:08:10.212849 2138610 main.go:141] libmachine: (calico-649777) Calling .DriverName
	I0819 13:08:10.212997 2138610 start.go:159] libmachine.API.Create for "calico-649777" (driver="kvm2")
	I0819 13:08:10.213028 2138610 client.go:168] LocalClient.Create starting
	I0819 13:08:10.213062 2138610 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 13:08:10.213109 2138610 main.go:141] libmachine: Decoding PEM data...
	I0819 13:08:10.213133 2138610 main.go:141] libmachine: Parsing certificate...
	I0819 13:08:10.213206 2138610 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 13:08:10.213235 2138610 main.go:141] libmachine: Decoding PEM data...
	I0819 13:08:10.213256 2138610 main.go:141] libmachine: Parsing certificate...
	I0819 13:08:10.213288 2138610 main.go:141] libmachine: Running pre-create checks...
	I0819 13:08:10.213298 2138610 main.go:141] libmachine: (calico-649777) Calling .PreCreateCheck
	I0819 13:08:10.213647 2138610 main.go:141] libmachine: (calico-649777) Calling .GetConfigRaw
	I0819 13:08:10.214070 2138610 main.go:141] libmachine: Creating machine...
	I0819 13:08:10.214088 2138610 main.go:141] libmachine: (calico-649777) Calling .Create
	I0819 13:08:10.214233 2138610 main.go:141] libmachine: (calico-649777) Creating KVM machine...
	I0819 13:08:10.215431 2138610 main.go:141] libmachine: (calico-649777) DBG | found existing default KVM network
	I0819 13:08:10.216858 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.216674 2138633 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:3a:57} reservation:<nil>}
	I0819 13:08:10.217633 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.217550 2138633 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:44:aa} reservation:<nil>}
	I0819 13:08:10.218770 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.218661 2138633 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bcf40}
	I0819 13:08:10.218802 2138610 main.go:141] libmachine: (calico-649777) DBG | created network xml: 
	I0819 13:08:10.218816 2138610 main.go:141] libmachine: (calico-649777) DBG | <network>
	I0819 13:08:10.218832 2138610 main.go:141] libmachine: (calico-649777) DBG |   <name>mk-calico-649777</name>
	I0819 13:08:10.218841 2138610 main.go:141] libmachine: (calico-649777) DBG |   <dns enable='no'/>
	I0819 13:08:10.218864 2138610 main.go:141] libmachine: (calico-649777) DBG |   
	I0819 13:08:10.218878 2138610 main.go:141] libmachine: (calico-649777) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 13:08:10.218888 2138610 main.go:141] libmachine: (calico-649777) DBG |     <dhcp>
	I0819 13:08:10.218904 2138610 main.go:141] libmachine: (calico-649777) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 13:08:10.218926 2138610 main.go:141] libmachine: (calico-649777) DBG |     </dhcp>
	I0819 13:08:10.218937 2138610 main.go:141] libmachine: (calico-649777) DBG |   </ip>
	I0819 13:08:10.218943 2138610 main.go:141] libmachine: (calico-649777) DBG |   
	I0819 13:08:10.218953 2138610 main.go:141] libmachine: (calico-649777) DBG | </network>
	I0819 13:08:10.218959 2138610 main.go:141] libmachine: (calico-649777) DBG | 
	I0819 13:08:10.223795 2138610 main.go:141] libmachine: (calico-649777) DBG | trying to create private KVM network mk-calico-649777 192.168.61.0/24...
	I0819 13:08:10.302709 2138610 main.go:141] libmachine: (calico-649777) DBG | private KVM network mk-calico-649777 192.168.61.0/24 created
	I0819 13:08:10.302863 2138610 main.go:141] libmachine: (calico-649777) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777 ...
	I0819 13:08:10.302889 2138610 main.go:141] libmachine: (calico-649777) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 13:08:10.302943 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.302810 2138633 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:10.302972 2138610 main.go:141] libmachine: (calico-649777) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 13:08:10.613581 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.613440 2138633 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/id_rsa...
	I0819 13:08:10.704703 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.704539 2138633 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/calico-649777.rawdisk...
	I0819 13:08:10.705360 2138610 main.go:141] libmachine: (calico-649777) DBG | Writing magic tar header
	I0819 13:08:10.705383 2138610 main.go:141] libmachine: (calico-649777) DBG | Writing SSH key tar header
	I0819 13:08:10.705990 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:10.705906 2138633 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777 ...
	I0819 13:08:10.706083 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777
	I0819 13:08:10.706110 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 13:08:10.706127 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777 (perms=drwx------)
	I0819 13:08:10.706142 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:08:10.706157 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 13:08:10.706173 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 13:08:10.706209 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 13:08:10.706224 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 13:08:10.706242 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 13:08:10.706256 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 13:08:10.706271 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home/jenkins
	I0819 13:08:10.706285 2138610 main.go:141] libmachine: (calico-649777) DBG | Checking permissions on dir: /home
	I0819 13:08:10.706296 2138610 main.go:141] libmachine: (calico-649777) DBG | Skipping /home - not owner
	I0819 13:08:10.706313 2138610 main.go:141] libmachine: (calico-649777) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 13:08:10.706326 2138610 main.go:141] libmachine: (calico-649777) Creating domain...
	I0819 13:08:10.707271 2138610 main.go:141] libmachine: (calico-649777) define libvirt domain using xml: 
	I0819 13:08:10.707292 2138610 main.go:141] libmachine: (calico-649777) <domain type='kvm'>
	I0819 13:08:10.707301 2138610 main.go:141] libmachine: (calico-649777)   <name>calico-649777</name>
	I0819 13:08:10.707309 2138610 main.go:141] libmachine: (calico-649777)   <memory unit='MiB'>3072</memory>
	I0819 13:08:10.707318 2138610 main.go:141] libmachine: (calico-649777)   <vcpu>2</vcpu>
	I0819 13:08:10.707331 2138610 main.go:141] libmachine: (calico-649777)   <features>
	I0819 13:08:10.707343 2138610 main.go:141] libmachine: (calico-649777)     <acpi/>
	I0819 13:08:10.707365 2138610 main.go:141] libmachine: (calico-649777)     <apic/>
	I0819 13:08:10.707378 2138610 main.go:141] libmachine: (calico-649777)     <pae/>
	I0819 13:08:10.707384 2138610 main.go:141] libmachine: (calico-649777)     
	I0819 13:08:10.707396 2138610 main.go:141] libmachine: (calico-649777)   </features>
	I0819 13:08:10.707406 2138610 main.go:141] libmachine: (calico-649777)   <cpu mode='host-passthrough'>
	I0819 13:08:10.707413 2138610 main.go:141] libmachine: (calico-649777)   
	I0819 13:08:10.707423 2138610 main.go:141] libmachine: (calico-649777)   </cpu>
	I0819 13:08:10.707431 2138610 main.go:141] libmachine: (calico-649777)   <os>
	I0819 13:08:10.707441 2138610 main.go:141] libmachine: (calico-649777)     <type>hvm</type>
	I0819 13:08:10.707451 2138610 main.go:141] libmachine: (calico-649777)     <boot dev='cdrom'/>
	I0819 13:08:10.707460 2138610 main.go:141] libmachine: (calico-649777)     <boot dev='hd'/>
	I0819 13:08:10.707469 2138610 main.go:141] libmachine: (calico-649777)     <bootmenu enable='no'/>
	I0819 13:08:10.707479 2138610 main.go:141] libmachine: (calico-649777)   </os>
	I0819 13:08:10.707491 2138610 main.go:141] libmachine: (calico-649777)   <devices>
	I0819 13:08:10.707502 2138610 main.go:141] libmachine: (calico-649777)     <disk type='file' device='cdrom'>
	I0819 13:08:10.707520 2138610 main.go:141] libmachine: (calico-649777)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/boot2docker.iso'/>
	I0819 13:08:10.707536 2138610 main.go:141] libmachine: (calico-649777)       <target dev='hdc' bus='scsi'/>
	I0819 13:08:10.707547 2138610 main.go:141] libmachine: (calico-649777)       <readonly/>
	I0819 13:08:10.707557 2138610 main.go:141] libmachine: (calico-649777)     </disk>
	I0819 13:08:10.707566 2138610 main.go:141] libmachine: (calico-649777)     <disk type='file' device='disk'>
	I0819 13:08:10.707586 2138610 main.go:141] libmachine: (calico-649777)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 13:08:10.707602 2138610 main.go:141] libmachine: (calico-649777)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/calico-649777/calico-649777.rawdisk'/>
	I0819 13:08:10.707617 2138610 main.go:141] libmachine: (calico-649777)       <target dev='hda' bus='virtio'/>
	I0819 13:08:10.707629 2138610 main.go:141] libmachine: (calico-649777)     </disk>
	I0819 13:08:10.707638 2138610 main.go:141] libmachine: (calico-649777)     <interface type='network'>
	I0819 13:08:10.707650 2138610 main.go:141] libmachine: (calico-649777)       <source network='mk-calico-649777'/>
	I0819 13:08:10.707668 2138610 main.go:141] libmachine: (calico-649777)       <model type='virtio'/>
	I0819 13:08:10.707678 2138610 main.go:141] libmachine: (calico-649777)     </interface>
	I0819 13:08:10.707685 2138610 main.go:141] libmachine: (calico-649777)     <interface type='network'>
	I0819 13:08:10.707697 2138610 main.go:141] libmachine: (calico-649777)       <source network='default'/>
	I0819 13:08:10.707706 2138610 main.go:141] libmachine: (calico-649777)       <model type='virtio'/>
	I0819 13:08:10.707717 2138610 main.go:141] libmachine: (calico-649777)     </interface>
	I0819 13:08:10.707728 2138610 main.go:141] libmachine: (calico-649777)     <serial type='pty'>
	I0819 13:08:10.707736 2138610 main.go:141] libmachine: (calico-649777)       <target port='0'/>
	I0819 13:08:10.707746 2138610 main.go:141] libmachine: (calico-649777)     </serial>
	I0819 13:08:10.707755 2138610 main.go:141] libmachine: (calico-649777)     <console type='pty'>
	I0819 13:08:10.707766 2138610 main.go:141] libmachine: (calico-649777)       <target type='serial' port='0'/>
	I0819 13:08:10.707774 2138610 main.go:141] libmachine: (calico-649777)     </console>
	I0819 13:08:10.707795 2138610 main.go:141] libmachine: (calico-649777)     <rng model='virtio'>
	I0819 13:08:10.707806 2138610 main.go:141] libmachine: (calico-649777)       <backend model='random'>/dev/random</backend>
	I0819 13:08:10.707815 2138610 main.go:141] libmachine: (calico-649777)     </rng>
	I0819 13:08:10.707823 2138610 main.go:141] libmachine: (calico-649777)     
	I0819 13:08:10.707830 2138610 main.go:141] libmachine: (calico-649777)     
	I0819 13:08:10.707838 2138610 main.go:141] libmachine: (calico-649777)   </devices>
	I0819 13:08:10.707848 2138610 main.go:141] libmachine: (calico-649777) </domain>
	I0819 13:08:10.707857 2138610 main.go:141] libmachine: (calico-649777) 
	I0819 13:08:10.713033 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:8a:a4:66 in network default
	I0819 13:08:10.713733 2138610 main.go:141] libmachine: (calico-649777) Ensuring networks are active...
	I0819 13:08:10.713762 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:10.714545 2138610 main.go:141] libmachine: (calico-649777) Ensuring network default is active
	I0819 13:08:10.714851 2138610 main.go:141] libmachine: (calico-649777) Ensuring network mk-calico-649777 is active
	I0819 13:08:10.715397 2138610 main.go:141] libmachine: (calico-649777) Getting domain xml...
	I0819 13:08:10.716315 2138610 main.go:141] libmachine: (calico-649777) Creating domain...
	I0819 13:08:12.097802 2138610 main.go:141] libmachine: (calico-649777) Waiting to get IP...
	I0819 13:08:12.098806 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:12.099358 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:12.099389 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:12.099330 2138633 retry.go:31] will retry after 274.81844ms: waiting for machine to come up
	I0819 13:08:12.375737 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:12.376449 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:12.376484 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:12.376394 2138633 retry.go:31] will retry after 375.15178ms: waiting for machine to come up
	I0819 13:08:12.753094 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:12.753672 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:12.753715 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:12.753573 2138633 retry.go:31] will retry after 379.742892ms: waiting for machine to come up
	I0819 13:08:13.135380 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:13.136096 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:13.136131 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:13.136026 2138633 retry.go:31] will retry after 474.902187ms: waiting for machine to come up
	I0819 13:08:13.613049 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:13.613588 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:13.613625 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:13.613558 2138633 retry.go:31] will retry after 484.923411ms: waiting for machine to come up
	I0819 13:08:14.100379 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:14.100891 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:14.100926 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:14.100851 2138633 retry.go:31] will retry after 794.279866ms: waiting for machine to come up
	I0819 13:08:14.897096 2138610 main.go:141] libmachine: (calico-649777) DBG | domain calico-649777 has defined MAC address 52:54:00:7b:b5:8d in network mk-calico-649777
	I0819 13:08:14.897656 2138610 main.go:141] libmachine: (calico-649777) DBG | unable to find current IP address of domain calico-649777 in network mk-calico-649777
	I0819 13:08:14.897687 2138610 main.go:141] libmachine: (calico-649777) DBG | I0819 13:08:14.897608 2138633 retry.go:31] will retry after 798.828197ms: waiting for machine to come up
	I0819 13:08:11.920712 2136755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:08:11.921239 2136755 addons.go:510] duration metric: took 3.33491ms for enable addons: enabled=[]
	I0819 13:08:12.103673 2136755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:08:12.124223 2136755 node_ready.go:35] waiting up to 6m0s for node "pause-698949" to be "Ready" ...
	I0819 13:08:12.128196 2136755 node_ready.go:49] node "pause-698949" has status "Ready":"True"
	I0819 13:08:12.128218 2136755 node_ready.go:38] duration metric: took 3.947723ms for node "pause-698949" to be "Ready" ...
	I0819 13:08:12.128231 2136755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:08:12.133920 2136755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5qqtm" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.453720 2136755 pod_ready.go:93] pod "coredns-6f6b679f8f-5qqtm" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:12.453745 2136755 pod_ready.go:82] duration metric: took 319.795906ms for pod "coredns-6f6b679f8f-5qqtm" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.453756 2136755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.850313 2136755 pod_ready.go:93] pod "etcd-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:12.850342 2136755 pod_ready.go:82] duration metric: took 396.578671ms for pod "etcd-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:12.850355 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.248787 2136755 pod_ready.go:93] pod "kube-apiserver-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:13.248827 2136755 pod_ready.go:82] duration metric: took 398.453288ms for pod "kube-apiserver-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.248851 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.649613 2136755 pod_ready.go:93] pod "kube-controller-manager-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:13.649641 2136755 pod_ready.go:82] duration metric: took 400.782186ms for pod "kube-controller-manager-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:13.649652 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.049276 2136755 pod_ready.go:93] pod "kube-proxy-5xzwk" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:14.049305 2136755 pod_ready.go:82] duration metric: took 399.645666ms for pod "kube-proxy-5xzwk" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.049314 2136755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.448722 2136755 pod_ready.go:93] pod "kube-scheduler-pause-698949" in "kube-system" namespace has status "Ready":"True"
	I0819 13:08:14.448747 2136755 pod_ready.go:82] duration metric: took 399.426684ms for pod "kube-scheduler-pause-698949" in "kube-system" namespace to be "Ready" ...
	I0819 13:08:14.448756 2136755 pod_ready.go:39] duration metric: took 2.320511819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:08:14.448770 2136755 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:08:14.448831 2136755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:08:14.463230 2136755 api_server.go:72] duration metric: took 2.545327543s to wait for apiserver process to appear ...
	I0819 13:08:14.463255 2136755 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:08:14.463272 2136755 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0819 13:08:14.468845 2136755 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I0819 13:08:14.469872 2136755 api_server.go:141] control plane version: v1.31.0
	I0819 13:08:14.469898 2136755 api_server.go:131] duration metric: took 6.635694ms to wait for apiserver health ...
	I0819 13:08:14.469908 2136755 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:08:14.652536 2136755 system_pods.go:59] 6 kube-system pods found
	I0819 13:08:14.652570 2136755 system_pods.go:61] "coredns-6f6b679f8f-5qqtm" [5797c960-9b94-4164-90c5-f0b89084c23a] Running
	I0819 13:08:14.652578 2136755 system_pods.go:61] "etcd-pause-698949" [42e6e52d-9c13-4152-9fd7-b745b28078d9] Running
	I0819 13:08:14.652583 2136755 system_pods.go:61] "kube-apiserver-pause-698949" [8f1c7b7b-3cf2-461a-87b3-5c4a2b865f21] Running
	I0819 13:08:14.652588 2136755 system_pods.go:61] "kube-controller-manager-pause-698949" [e997a844-c93f-43bc-9c58-f6782c0df3a2] Running
	I0819 13:08:14.652633 2136755 system_pods.go:61] "kube-proxy-5xzwk" [5b6aacd5-988c-4114-b95d-768961297985] Running
	I0819 13:08:14.652644 2136755 system_pods.go:61] "kube-scheduler-pause-698949" [37bc080f-8d60-45ca-b3e1-05ce1acd85d6] Running
	I0819 13:08:14.652659 2136755 system_pods.go:74] duration metric: took 182.736839ms to wait for pod list to return data ...
	I0819 13:08:14.652672 2136755 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:08:14.849462 2136755 default_sa.go:45] found service account: "default"
	I0819 13:08:14.849498 2136755 default_sa.go:55] duration metric: took 196.813519ms for default service account to be created ...
	I0819 13:08:14.849511 2136755 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:08:15.053291 2136755 system_pods.go:86] 6 kube-system pods found
	I0819 13:08:15.053322 2136755 system_pods.go:89] "coredns-6f6b679f8f-5qqtm" [5797c960-9b94-4164-90c5-f0b89084c23a] Running
	I0819 13:08:15.053330 2136755 system_pods.go:89] "etcd-pause-698949" [42e6e52d-9c13-4152-9fd7-b745b28078d9] Running
	I0819 13:08:15.053336 2136755 system_pods.go:89] "kube-apiserver-pause-698949" [8f1c7b7b-3cf2-461a-87b3-5c4a2b865f21] Running
	I0819 13:08:15.053341 2136755 system_pods.go:89] "kube-controller-manager-pause-698949" [e997a844-c93f-43bc-9c58-f6782c0df3a2] Running
	I0819 13:08:15.053346 2136755 system_pods.go:89] "kube-proxy-5xzwk" [5b6aacd5-988c-4114-b95d-768961297985] Running
	I0819 13:08:15.053351 2136755 system_pods.go:89] "kube-scheduler-pause-698949" [37bc080f-8d60-45ca-b3e1-05ce1acd85d6] Running
	I0819 13:08:15.053361 2136755 system_pods.go:126] duration metric: took 203.843398ms to wait for k8s-apps to be running ...
	I0819 13:08:15.053380 2136755 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:08:15.053449 2136755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:08:15.081710 2136755 system_svc.go:56] duration metric: took 28.317442ms WaitForService to wait for kubelet
	I0819 13:08:15.081759 2136755 kubeadm.go:582] duration metric: took 3.163860746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:08:15.081783 2136755 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:08:15.250321 2136755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:08:15.250354 2136755 node_conditions.go:123] node cpu capacity is 2
	I0819 13:08:15.250370 2136755 node_conditions.go:105] duration metric: took 168.579956ms to run NodePressure ...
	I0819 13:08:15.250385 2136755 start.go:241] waiting for startup goroutines ...
	I0819 13:08:15.250395 2136755 start.go:246] waiting for cluster config update ...
	I0819 13:08:15.250405 2136755 start.go:255] writing updated cluster config ...
	I0819 13:08:15.250800 2136755 ssh_runner.go:195] Run: rm -f paused
	I0819 13:08:15.308035 2136755 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:08:15.310037 2136755 out.go:177] * Done! kubectl is now configured to use "pause-698949" cluster and "default" namespace by default
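The lines above show the post-start wait sequence for "pause-698949": probe the apiserver /healthz endpoint, then confirm kube-system pods, the default service account, kubelet, and node conditions. A minimal sketch of such a health poll, assuming the endpoint URL from the log and skipping TLS verification purely for brevity (illustrative only, not minikube's own code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it returns
// HTTP 200 or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.221:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}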
	I0819 13:08:10.959461 2137182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:08:10.976853 2137182 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337 for IP: 192.168.50.233
	I0819 13:08:10.976882 2137182 certs.go:194] generating shared ca certs ...
	I0819 13:08:10.976904 2137182 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:08:10.977098 2137182 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:08:10.977143 2137182 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:08:10.977152 2137182 certs.go:256] generating profile certs ...
	I0819 13:08:10.977238 2137182 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/client.key
	I0819 13:08:10.977292 2137182 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key.15d11da9
	I0819 13:08:10.977326 2137182 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.key
	I0819 13:08:10.977468 2137182 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:08:10.977502 2137182 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:08:10.977517 2137182 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:08:10.977542 2137182 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:08:10.977567 2137182 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:08:10.977587 2137182 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:08:10.977627 2137182 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:08:10.978265 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:08:11.004584 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:08:11.029759 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:08:11.055341 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:08:11.079957 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 13:08:11.105374 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:08:11.130965 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:08:11.155764 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kubernetes-upgrade-227337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:08:11.180212 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:08:11.208539 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:08:11.239131 2137182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:08:11.265827 2137182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:08:11.283752 2137182 ssh_runner.go:195] Run: openssl version
	I0819 13:08:11.289490 2137182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:08:11.300445 2137182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:08:11.304989 2137182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:08:11.305044 2137182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:08:11.310749 2137182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:08:11.321644 2137182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:08:11.333505 2137182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:08:11.338008 2137182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:08:11.338065 2137182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:08:11.343662 2137182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:08:11.354194 2137182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:08:11.437635 2137182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:08:11.484446 2137182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:08:11.484539 2137182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:08:11.518467 2137182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
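The openssl/ln pairs above install each CA into the guest's system trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0. A hedged sketch of that step, shelling out to openssl as the log does (paths and the helper name are illustrative, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the certificate's subject hash with
// `openssl x509 -hash -noout` and recreates the /etc/ssl/certs/<hash>.0
// symlink, mirroring the `ln -fs` commands in the log.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // idempotent, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}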
	I0819 13:08:11.601195 2137182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:08:11.653595 2137182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:08:11.743339 2137182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:08:11.799667 2137182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:08:11.830206 2137182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:08:11.972857 2137182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:08:12.005193 2137182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
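The `openssl x509 -noout -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. An assumed in-process equivalent using crypto/x509 (file path taken from the log; this is a sketch, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window, the same question -checkend 86400 answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}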
	I0819 13:08:12.041453 2137182 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-227337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-227337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:08:12.041597 2137182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:08:12.041693 2137182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:08:12.281055 2137182 cri.go:89] found id: "a6a4571b47640282b3fd951904b9e95130fd5afef51cf9fcb7211e5c16eae3eb"
	I0819 13:08:12.281088 2137182 cri.go:89] found id: "19257bcdb0f07e067525dfa287d11ba0e23334f386c8b0639f688697ea93a2f6"
	I0819 13:08:12.281094 2137182 cri.go:89] found id: "98612c5bc3fd2d28d8b35dce2e923298a1627ee02fda1ad6521f504d17484d40"
	I0819 13:08:12.281099 2137182 cri.go:89] found id: "41919a85227a5d7fc06a55f2c542caba7719bf315403f8fad4bbabf59aa0c827"
	I0819 13:08:12.281104 2137182 cri.go:89] found id: "9a8971636feb40a6be92f3ac2cd3934b1def5a7038ee47c1ad33c65b2979c66b"
	I0819 13:08:12.281109 2137182 cri.go:89] found id: "fb3fb006cf8fbfe74d4a023406f5efa52c1f1e0453967d479fdec057ce154a5b"
	I0819 13:08:12.281112 2137182 cri.go:89] found id: "1919c76e59b2f8244f0cf79f398d758e9bad07562bf0a2577c7f4c3794cddaae"
	I0819 13:08:12.281116 2137182 cri.go:89] found id: "d5a93e9c6fd09ce14d66ccdcf763eb1b206a363e5981362e8e8572cd6e0dee3a"
	I0819 13:08:12.281120 2137182 cri.go:89] found id: "a751ca991f1e66d1f2d680b7dd41fe23c8735d4cf8f3b8214d7828176c151c90"
	I0819 13:08:12.281131 2137182 cri.go:89] found id: ""
	I0819 13:08:12.281188 2137182 ssh_runner.go:195] Run: sudo runc list -f json
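Before restarting the cluster, the log enumerates existing kube-system containers by running crictl with a namespace label filter. A short sketch of that listing step, assuming crictl is on PATH and reachable via sudo (the same command the log shows, wrapped for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs returns the IDs of all containers labeled with
// the kube-system pod namespace, matching the crictl invocation above.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}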
	
	
	==> CRI-O <==
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.305154921Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5qqtm,Uid:5797c960-9b94-4164-90c5-f0b89084c23a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861830463673,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:06:43.505906034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-698949,Uid:abf3663a182e203a855f90d849323b4b,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861526108378,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.221:8443,kubernetes.io/config.hash: abf3663a182e203a855f90d849323b4b,kubernetes.io/config.seen: 2024-08-19T13:06:38.065571738Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&PodSandboxMetadata{Name:kube-proxy-5xzwk,Uid:5b6aacd5-988c-4114-b95d-768961297985,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861490397909,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:06:43.347154164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&PodSandboxMetadata{Name:etcd-pause-698949,Uid:ad5a8791af5b937c21ed3014b6a39af8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861475072336,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.221:2379,kubernetes.io/config.hash: ad5a8791af5b937c21ed3014b6a39af8,kubernetes.io/config.seen: 2024-08-19T13:06:38.065567489Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-698949,Uid:339d7b0c589af675a2204deded697e0a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861457390366,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 339d7b0c589af675a2204deded697e0a,kubernetes.io/config.seen: 2024-08-19T13:06:38.065573216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-698949,Uid:23a78573af93162d451bb71cb6cb100b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724072861437639153,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 23a78573af93162d451bb71cb6cb100b,kubernetes.io/config.seen: 2024-08-19T13:06:38.065573995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&PodSandboxMetadata{Name:kube-proxy-5xzwk,Uid:5b6aacd5-988c-4114-b95d-768961297985,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724072805161724876,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-19T13:06:43.347154164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5qqtm,Uid:5797c960-9b94-4164-90c5-f0b89084c23a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724072804430088260,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:06:43.505906034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9ec19494-c7e1-4e08-83c6-a0050cfce900 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.306338305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dd159d1-34ef-492e-907e-9ffb9817cd79 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.306426648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dd159d1-34ef-492e-907e-9ffb9817cd79 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.307888439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dd159d1-34ef-492e-907e-9ffb9817cd79 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.341812982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94973c41-921f-4994-9401-47d34f2f281c name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.341992779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94973c41-921f-4994-9401-47d34f2f281c name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.343998057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8416ccb5-28c6-407f-8419-ac69cc53eef9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.344482658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072898344446618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8416ccb5-28c6-407f-8419-ac69cc53eef9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.345240512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91e557ba-96a5-4851-865a-972ffcfe8c27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.345347621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91e557ba-96a5-4851-865a-972ffcfe8c27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.345756800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91e557ba-96a5-4851-865a-972ffcfe8c27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.399990738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0284cb75-bdcd-4ba5-975d-60ded141f6ee name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.400063851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0284cb75-bdcd-4ba5-975d-60ded141f6ee name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.401211132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1514d5b5-dcbf-4ec6-b9b7-a3f52d6ae307 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.401599874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072898401577153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1514d5b5-dcbf-4ec6-b9b7-a3f52d6ae307 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.402226040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b933cdff-ef91-4232-96b4-9fbf6b89408a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.402277324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b933cdff-ef91-4232-96b4-9fbf6b89408a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.402505180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b933cdff-ef91-4232-96b4-9fbf6b89408a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.456359627Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b284e595-f43e-435c-a5e5-0a9de3c70226 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.456464985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b284e595-f43e-435c-a5e5-0a9de3c70226 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.458145269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f8fd1f1-f80c-4336-892e-dd71f2854fdd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.458782592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072898458661058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f8fd1f1-f80c-4336-892e-dd71f2854fdd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.459469718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=936a7a60-9dab-46da-b697-902274bbbc9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.459575144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=936a7a60-9dab-46da-b697-902274bbbc9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:08:18 pause-698949 crio[2119]: time="2024-08-19 13:08:18.460017060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724072876550752569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724072876549752176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724072876513239944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724072876509182002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9,PodSandboxId:a1728486cc7e06e1bae0434a799a23e4a02e6aa4e26f17ae9ed01ddf88fc0671,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724072862039751651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c,PodSandboxId:b9e8a7f95c456897d3c633e0b25c41b8b48d221aa1df752a5c9a6b125931fb3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724072862696629997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986,PodSandboxId:3a7e92f42670753afbdc7cc043663834722515aeaa468922f770af1cf4d302e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724072862044758413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5a8791af5b937c21ed3014b6a39af8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e,PodSandboxId:4441563408cc2f82f2d78227c97d1dee8cd5a7e623b16659e03e418cad994bca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724072861932188234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3663a182e203a855f90d849323b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b,PodSandboxId:123519786b99773723c1d983c0eec621fe1d27cb1228d256e7376cacb44b148f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724072861833037609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23a78573af93162d451bb71cb6cb100b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5,PodSandboxId:7ef547908afeb84d065c64d9f9de31fbfb1584b506ece6441ed798be6a3085fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724072861843910781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-698949,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339d7b0c589af675a2204deded697e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2,PodSandboxId:92819c2c42c090c823a1fea410bcc3c5b5445a318ce6983d41f5668911d29ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724072805285887534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xzwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b6aacd5-988c-4114-b95d-768961297985,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c,PodSandboxId:fc6a22e9edf3f6cdb0536fcd2e3dbab8a9e3840d4c8b6a0b3a994b4f3cbb68ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724072804650992022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5qqtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5797c960-9b94-4164-90c5-f0b89084c23a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=936a7a60-9dab-46da-b697-902274bbbc9d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	52ec1fdc7cc5f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago       Running             etcd                      2                   3a7e92f426707       etcd-pause-698949
	2b030d2722cd7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   21 seconds ago       Running             kube-controller-manager   2                   7ef547908afeb       kube-controller-manager-pause-698949
	1eee737a34c7c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   22 seconds ago       Running             kube-scheduler            2                   123519786b997       kube-scheduler-pause-698949
	d8437ecd909ca       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   22 seconds ago       Running             kube-apiserver            2                   4441563408cc2       kube-apiserver-pause-698949
	7065181391fb9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago       Running             coredns                   1                   b9e8a7f95c456       coredns-6f6b679f8f-5qqtm
	c29a695a772bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   36 seconds ago       Exited              etcd                      1                   3a7e92f426707       etcd-pause-698949
	d53a3557493dc       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   36 seconds ago       Running             kube-proxy                1                   a1728486cc7e0       kube-proxy-5xzwk
	c2fa86322f82d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   36 seconds ago       Exited              kube-apiserver            1                   4441563408cc2       kube-apiserver-pause-698949
	632b98832078d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   36 seconds ago       Exited              kube-controller-manager   1                   7ef547908afeb       kube-controller-manager-pause-698949
	4609c6d508aea       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   36 seconds ago       Exited              kube-scheduler            1                   123519786b997       kube-scheduler-pause-698949
	5655595347a7b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   92819c2c42c09       kube-proxy-5xzwk
	0a1eb2b7e8365       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   fc6a22e9edf3f       coredns-6f6b679f8f-5qqtm
	
	
	==> coredns [0a1eb2b7e83657df282c0d42fe8bd7b6ecdb7628e73c2cd3b855038bb752b59c] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1200762963]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:06:44.819) (total time: 30003ms):
	Trace[1200762963]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:07:14.822)
	Trace[1200762963]: [30.003744548s] [30.003744548s] END
	[INFO] plugin/kubernetes: Trace[613852929]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:06:44.819) (total time: 30003ms):
	Trace[613852929]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:07:14.822)
	Trace[613852929]: [30.003889183s] [30.003889183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1386248915]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:06:44.821) (total time: 30002ms):
	Trace[1386248915]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (13:07:14.822)
	Trace[1386248915]: [30.002172396s] [30.002172396s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43131 - 2465 "HINFO IN 4822256440033380775.1684270965545853753. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013097064s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7065181391fb90ff9971305427aa3a65d0e3ba939fefd9ef3ebdeded8fe42b7c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49027 - 24886 "HINFO IN 2718717822102568054.4782735869707459045. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009289096s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[914264663]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:07:43.131) (total time: 10884ms):
	Trace[914264663]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55076->10.96.0.1:443: read: connection reset by peer 10884ms (13:07:54.016)
	Trace[914264663]: [10.884575119s] [10.884575119s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55060->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[392737611]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:07:43.131) (total time: 10885ms):
	Trace[392737611]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55060->10.96.0.1:443: read: connection reset by peer 10885ms (13:07:54.016)
	Trace[392737611]: [10.885789368s] [10.885789368s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55060->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55088->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1948212645]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 13:07:43.131) (total time: 10887ms):
	Trace[1948212645]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55088->10.96.0.1:443: read: connection reset by peer 10887ms (13:07:54.019)
	Trace[1948212645]: [10.887729515s] [10.887729515s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:55088->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-698949
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-698949
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=pause-698949
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_06_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:06:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-698949
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:08:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:07:59 +0000   Mon, 19 Aug 2024 13:06:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    pause-698949
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eefdb61dd00488ab1292822794d10e5
	  System UUID:                9eefdb61-dd00-488a-b129-2822794d10e5
	  Boot ID:                    0fd418ca-ba1b-4f09-9d18-cad073bf18f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5qqtm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-698949                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-698949             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-pause-698949    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-5xzwk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-698949             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node pause-698949 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node pause-698949 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)  kubelet          Node pause-698949 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node pause-698949 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node pause-698949 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node pause-698949 status is now: NodeHasSufficientPID
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeReady                99s                  kubelet          Node pause-698949 status is now: NodeReady
	  Normal  RegisteredNode           96s                  node-controller  Node pause-698949 event: Registered Node pause-698949 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-698949 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-698949 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-698949 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                  node-controller  Node pause-698949 event: Registered Node pause-698949 in Controller
	
	
	==> dmesg <==
	[  +9.088019] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.064102] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078572] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.221212] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.151516] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.326355] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.405437] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.060030] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.088669] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.063241] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.013380] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.073821] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.302705] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.117095] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 13:07] kauditd_printk_skb: 67 callbacks suppressed
	[ +14.148963] systemd-fstab-generator[2038]: Ignoring "noauto" option for root device
	[  +0.150255] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +0.171012] systemd-fstab-generator[2064]: Ignoring "noauto" option for root device
	[  +0.139210] systemd-fstab-generator[2076]: Ignoring "noauto" option for root device
	[  +0.298919] systemd-fstab-generator[2104]: Ignoring "noauto" option for root device
	[  +1.245187] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[ +12.536729] kauditd_printk_skb: 196 callbacks suppressed
	[  +2.074690] systemd-fstab-generator[2974]: Ignoring "noauto" option for root device
	[  +4.101960] kauditd_printk_skb: 39 callbacks suppressed
	[Aug19 13:08] systemd-fstab-generator[3328]: Ignoring "noauto" option for root device
	
	
	==> etcd [52ec1fdc7cc5ff1943cd55eb994800c06499e8af8a1665a26384555118e269ad] <==
	{"level":"info","ts":"2024-08-19T13:07:57.179975Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c75d0b2482cd9027","local-member-id":"e7b0d5fc33cf92f8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:57.180019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:57.178999Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T13:07:57.187047Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:07:57.190068Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e7b0d5fc33cf92f8","initial-advertise-peer-urls":["https://192.168.39.221:2380"],"listen-peer-urls":["https://192.168.39.221:2380"],"advertise-client-urls":["https://192.168.39.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:07:57.190195Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:07:57.187073Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.221:2380"}
	{"level":"info","ts":"2024-08-19T13:07:57.190258Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.221:2380"}
	{"level":"info","ts":"2024-08-19T13:07:58.213314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:58.213436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:58.213484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 received MsgPreVoteResp from e7b0d5fc33cf92f8 at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:58.213527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.213552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 received MsgVoteResp from e7b0d5fc33cf92f8 at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.213578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.213603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e7b0d5fc33cf92f8 elected leader e7b0d5fc33cf92f8 at term 3"}
	{"level":"info","ts":"2024-08-19T13:07:58.218863Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e7b0d5fc33cf92f8","local-member-attributes":"{Name:pause-698949 ClientURLs:[https://192.168.39.221:2379]}","request-path":"/0/members/e7b0d5fc33cf92f8/attributes","cluster-id":"c75d0b2482cd9027","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:07:58.219410Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:07:58.221927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:07:58.222767Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:07:58.222819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:07:58.224247Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:07:58.225397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:07:58.226094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.221:2379"}
	{"level":"info","ts":"2024-08-19T13:07:58.226796Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:08:01.491274Z","caller":"traceutil/trace.go:171","msg":"trace[1398600292] transaction","detail":"{read_only:false; number_of_response:0; response_revision:477; }","duration":"127.850014ms","start":"2024-08-19T13:08:01.363409Z","end":"2024-08-19T13:08:01.491259Z","steps":["trace[1398600292] 'process raft request'  (duration: 127.643037ms)"],"step_count":1}
	
	
	==> etcd [c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986] <==
	{"level":"warn","ts":"2024-08-19T13:07:42.801144Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T13:07:42.802748Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.221:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.221:2380","--initial-cluster=pause-698949=https://192.168.39.221:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.221:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.221:2380","--name=pause-698949","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-08-19T13:07:42.802917Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-08-19T13:07:42.804788Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T13:07:42.804853Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.221:2380"]}
	{"level":"info","ts":"2024-08-19T13:07:42.804914Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:07:42.807786Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"]}
	{"level":"info","ts":"2024-08-19T13:07:42.817020Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-698949","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.221:2380"],"listen-peer-urls":["https://192.168.39.221:2380"],"advertise-client-urls":["https://192.168.39.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> kernel <==
	 13:08:18 up 2 min,  0 users,  load average: 0.40, 0.21, 0.08
	Linux pause-698949 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e] <==
	I0819 13:07:42.366074       1 options.go:228] external host was not specified, using 192.168.39.221
	I0819 13:07:42.371900       1 server.go:142] Version: v1.31.0
	I0819 13:07:42.371992       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:43.401870       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0819 13:07:43.405888       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:43.406030       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 13:07:43.418509       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:07:43.443654       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 13:07:43.444790       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 13:07:43.445136       1 instance.go:232] Using reconciler: lease
	W0819 13:07:43.448400       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:44.407053       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:44.407134       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:44.449491       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:45.825072       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:46.048879       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:46.141563       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:48.281141       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:48.422959       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:48.823140       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:51.922778       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:07:52.737363       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d8437ecd909ca7868fd0468b0afdb4bd6239bb36244d3be08b830ca411d6c5e1] <==
	I0819 13:07:59.538068       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 13:07:59.537962       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 13:07:59.541302       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 13:07:59.541633       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 13:07:59.543430       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 13:07:59.546582       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0819 13:07:59.554841       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 13:07:59.555510       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 13:07:59.563333       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:07:59.563386       1 policy_source.go:224] refreshing policies
	I0819 13:07:59.598948       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 13:07:59.601365       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 13:07:59.601513       1 aggregator.go:171] initial CRD sync complete...
	I0819 13:07:59.601531       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 13:07:59.601537       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 13:07:59.601542       1 cache.go:39] Caches are synced for autoregister controller
	I0819 13:07:59.617234       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 13:08:00.441460       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 13:08:01.554260       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 13:08:01.605993       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 13:08:01.679189       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 13:08:01.756357       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 13:08:01.772481       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 13:08:02.872273       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 13:08:02.921177       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2b030d2722cd7fc7cbf3079fb8772452eb3cfc08aba7f23e8307084ef9a18b7e] <==
	I0819 13:08:02.818632       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0819 13:08:02.821052       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 13:08:02.826183       1 shared_informer.go:320] Caches are synced for node
	I0819 13:08:02.826388       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0819 13:08:02.826545       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0819 13:08:02.826621       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0819 13:08:02.826645       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0819 13:08:02.826800       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 13:08:02.826936       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-698949"
	I0819 13:08:02.828590       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 13:08:02.828854       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="187.678µs"
	I0819 13:08:02.834533       1 shared_informer.go:320] Caches are synced for deployment
	I0819 13:08:02.899077       1 shared_informer.go:320] Caches are synced for taint
	I0819 13:08:02.900046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 13:08:02.900974       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-698949"
	I0819 13:08:02.901035       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 13:08:02.902422       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 13:08:02.969989       1 shared_informer.go:320] Caches are synced for disruption
	I0819 13:08:02.978420       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0819 13:08:03.020283       1 shared_informer.go:320] Caches are synced for crt configmap
	I0819 13:08:03.024250       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:08:03.031933       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:08:03.471042       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:08:03.520347       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:08:03.520447       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5] <==
	I0819 13:07:43.507561       1 serving.go:386] Generated self-signed cert in-memory
	I0819 13:07:44.034386       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 13:07:44.035774       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:44.037560       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 13:07:44.040194       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 13:07:44.040359       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:07:44.040441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [5655595347a7b4aef04e94692999706f5473f4b0d8b7ec275657af57a8d0c4b2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:06:45.465556       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:06:45.474618       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.221"]
	E0819 13:06:45.474830       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:06:45.510023       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:06:45.510066       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:06:45.510092       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:06:45.512602       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:06:45.512971       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:06:45.512995       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:06:45.514259       1 config.go:197] "Starting service config controller"
	I0819 13:06:45.514301       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:06:45.514322       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:06:45.514342       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:06:45.514920       1 config.go:326] "Starting node config controller"
	I0819 13:06:45.515085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:06:45.614418       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:06:45.614561       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:06:45.615236       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d53a3557493dc56c9d6bd1d430a25bc76a3339be0db3a06f9eeebbeb5bf372c9] <==
	 >
	E0819 13:07:43.657130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:07:54.018503       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-698949\": dial tcp 192.168.39.221:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.221:54700->192.168.39.221:8443: read: connection reset by peer"
	E0819 13:07:55.073337       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-698949\": dial tcp 192.168.39.221:8443: connect: connection refused"
	I0819 13:07:59.548199       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.221"]
	E0819 13:07:59.548479       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:07:59.619150       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:07:59.619232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:07:59.619263       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:07:59.622416       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:07:59.622763       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:07:59.623397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:59.625496       1 config.go:197] "Starting service config controller"
	I0819 13:07:59.625961       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:07:59.625995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:07:59.626350       1 config.go:326] "Starting node config controller"
	I0819 13:07:59.626379       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:07:59.626641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:07:59.727306       1 shared_informer.go:320] Caches are synced for node config
	I0819 13:07:59.727324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:07:59.728613       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1eee737a34c7c252b327b0113c8d573d8968bb3cf5ba8c3cc6c791aa46671ff9] <==
	I0819 13:07:57.695132       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:07:59.508900       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:07:59.508998       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:07:59.509009       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:07:59.509015       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:07:59.559105       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:07:59.559239       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:07:59.565066       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:07:59.565188       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:07:59.565224       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:07:59.565250       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:07:59.666178       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b] <==
	I0819 13:07:43.687146       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:07:54.016773       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.39.221:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.221:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.221:54722->192.168.39.221:8443: read: connection reset by peer
	W0819 13:07:54.016920       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:07:54.016952       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:07:54.030557       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:07:54.030620       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0819 13:07:54.030655       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0819 13:07:54.033408       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:07:54.033469       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 13:07:54.033510       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0819 13:07:54.034005       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0819 13:07:54.034070       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0819 13:07:54.034116       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.271043    2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/339d7b0c589af675a2204deded697e0a-k8s-certs\") pod \"kube-controller-manager-pause-698949\" (UID: \"339d7b0c589af675a2204deded697e0a\") " pod="kube-system/kube-controller-manager-pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.278411    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.279231    2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.221:8443: connect: connection refused" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.481355    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.482255    2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.221:8443: connect: connection refused" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.486578    2981 scope.go:117] "RemoveContainer" containerID="c2fa86322f82d3f432353651add895d325fee24f5e284a9bee2086cc5ca8fe8e"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.488514    2981 scope.go:117] "RemoveContainer" containerID="4609c6d508aea5ce32d9cf4679a41b03f257e93f73573f33b1e52b679e3a817b"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.492142    2981 scope.go:117] "RemoveContainer" containerID="632b98832078d30600ae52a6ef430ddff7a359a6d6169b8320a77f760bf5c6d5"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.496887    2981 scope.go:117] "RemoveContainer" containerID="c29a695a772bf0efa612fd6660095bb298c261c5690e3c703e997accea49e986"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.661285    2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-698949?timeout=10s\": dial tcp 192.168.39.221:8443: connect: connection refused" interval="800ms"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: I0819 13:07:56.884588    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:56 pause-698949 kubelet[2981]: E0819 13:07:56.885914    2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.221:8443: connect: connection refused" node="pause-698949"
	Aug 19 13:07:57 pause-698949 kubelet[2981]: I0819 13:07:57.687231    2981 kubelet_node_status.go:72] "Attempting to register node" node="pause-698949"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.589226    2981 kubelet_node_status.go:111] "Node was previously registered" node="pause-698949"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.589768    2981 kubelet_node_status.go:75] "Successfully registered node" node="pause-698949"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.589941    2981 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 13:07:59 pause-698949 kubelet[2981]: I0819 13:07:59.591759    2981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.014648    2981 apiserver.go:52] "Watching apiserver"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.053804    2981 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.116373    2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6aacd5-988c-4114-b95d-768961297985-lib-modules\") pod \"kube-proxy-5xzwk\" (UID: \"5b6aacd5-988c-4114-b95d-768961297985\") " pod="kube-system/kube-proxy-5xzwk"
	Aug 19 13:08:00 pause-698949 kubelet[2981]: I0819 13:08:00.117044    2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6aacd5-988c-4114-b95d-768961297985-xtables-lock\") pod \"kube-proxy-5xzwk\" (UID: \"5b6aacd5-988c-4114-b95d-768961297985\") " pod="kube-system/kube-proxy-5xzwk"
	Aug 19 13:08:06 pause-698949 kubelet[2981]: E0819 13:08:06.183770    2981 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072886183242433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:08:06 pause-698949 kubelet[2981]: E0819 13:08:06.183793    2981 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072886183242433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:08:16 pause-698949 kubelet[2981]: E0819 13:08:16.185986    2981 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072896185174740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:08:16 pause-698949 kubelet[2981]: E0819 13:08:16.186010    2981 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724072896185174740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:08:17.978561 2139194 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19479-2086935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-698949 -n pause-698949
helpers_test.go:261: (dbg) Run:  kubectl --context pause-698949 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.05s)
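Note on the "bufio.Scanner: token too long" failure reported at logs.go:258 above: that error is produced when a single line in lastStart.txt exceeds bufio.Scanner's default 64 KiB token limit. The following is a minimal sketch, not minikube's actual logs.go (and the file path is a hypothetical stand-in), showing how such a read could be made to tolerate very long lines by enlarging the scanner buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path standing in for .minikube/logs/lastStart.txt.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; allow lines up to 10 MiB instead.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, an over-long line surfaces here
		// as "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, "reading logs:", err)
	}
}

An alternative with no fixed per-line limit is to read the file with bufio.Reader.ReadString('\n') instead of a Scanner.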

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (296.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-620467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-620467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m56.252778047s)

                                                
                                                
-- stdout --
	* [old-k8s-version-620467] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-620467" primary control-plane node in "old-k8s-version-620467" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:10:10.891118 2144521 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:10:10.891266 2144521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:10:10.891279 2144521 out.go:358] Setting ErrFile to fd 2...
	I0819 13:10:10.891286 2144521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:10:10.891577 2144521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:10:10.892698 2144521 out.go:352] Setting JSON to false
	I0819 13:10:10.894456 2144521 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165161,"bootTime":1723907850,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:10:10.894559 2144521 start.go:139] virtualization: kvm guest
	I0819 13:10:10.896233 2144521 out.go:177] * [old-k8s-version-620467] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:10:10.897766 2144521 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:10:10.897786 2144521 notify.go:220] Checking for updates...
	I0819 13:10:10.900139 2144521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:10:10.901356 2144521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:10:10.902554 2144521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:10:10.903683 2144521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:10:10.904943 2144521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:10:10.906770 2144521 config.go:182] Loaded profile config "bridge-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:10:10.906894 2144521 config.go:182] Loaded profile config "enable-default-cni-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:10:10.907010 2144521 config.go:182] Loaded profile config "flannel-649777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:10:10.907155 2144521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:10:10.953412 2144521 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 13:10:10.954774 2144521 start.go:297] selected driver: kvm2
	I0819 13:10:10.954808 2144521 start.go:901] validating driver "kvm2" against <nil>
	I0819 13:10:10.954826 2144521 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:10:10.955719 2144521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:10:10.955830 2144521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:10:10.975603 2144521 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:10:10.975653 2144521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 13:10:10.975900 2144521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:10:10.975945 2144521 cni.go:84] Creating CNI manager for ""
	I0819 13:10:10.975956 2144521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:10:10.975977 2144521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 13:10:10.976069 2144521 start.go:340] cluster config:
	{Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:10:10.976217 2144521 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:10:10.977671 2144521 out.go:177] * Starting "old-k8s-version-620467" primary control-plane node in "old-k8s-version-620467" cluster
	I0819 13:10:10.978848 2144521 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:10:10.978902 2144521 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:10:10.978917 2144521 cache.go:56] Caching tarball of preloaded images
	I0819 13:10:10.979042 2144521 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:10:10.979069 2144521 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 13:10:10.979200 2144521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:10:10.979233 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json: {Name:mk8ccec52eda9928ea19083da6c7f4f5ca2a9ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:10:10.979454 2144521 start.go:360] acquireMachinesLock for old-k8s-version-620467: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:10:36.369330 2144521 start.go:364] duration metric: took 25.389821096s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:10:36.369422 2144521 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:10:36.369541 2144521 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 13:10:36.371457 2144521 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 13:10:36.371644 2144521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:10:36.371707 2144521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:10:36.388994 2144521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0819 13:10:36.389457 2144521 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:10:36.390083 2144521 main.go:141] libmachine: Using API Version  1
	I0819 13:10:36.390108 2144521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:10:36.390463 2144521 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:10:36.390676 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:10:36.390835 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:10:36.390997 2144521 start.go:159] libmachine.API.Create for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:10:36.391029 2144521 client.go:168] LocalClient.Create starting
	I0819 13:10:36.391067 2144521 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem
	I0819 13:10:36.391104 2144521 main.go:141] libmachine: Decoding PEM data...
	I0819 13:10:36.391125 2144521 main.go:141] libmachine: Parsing certificate...
	I0819 13:10:36.391193 2144521 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem
	I0819 13:10:36.391220 2144521 main.go:141] libmachine: Decoding PEM data...
	I0819 13:10:36.391237 2144521 main.go:141] libmachine: Parsing certificate...
	I0819 13:10:36.391268 2144521 main.go:141] libmachine: Running pre-create checks...
	I0819 13:10:36.391282 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .PreCreateCheck
	I0819 13:10:36.391617 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:10:36.392053 2144521 main.go:141] libmachine: Creating machine...
	I0819 13:10:36.392069 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .Create
	I0819 13:10:36.392245 2144521 main.go:141] libmachine: (old-k8s-version-620467) Creating KVM machine...
	I0819 13:10:36.393385 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found existing default KVM network
	I0819 13:10:36.394643 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:36.394477 2146017 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015710}
	I0819 13:10:36.394674 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | created network xml: 
	I0819 13:10:36.394688 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | <network>
	I0819 13:10:36.394698 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |   <name>mk-old-k8s-version-620467</name>
	I0819 13:10:36.394709 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |   <dns enable='no'/>
	I0819 13:10:36.394723 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |   
	I0819 13:10:36.394738 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 13:10:36.394749 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |     <dhcp>
	I0819 13:10:36.394762 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 13:10:36.394773 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |     </dhcp>
	I0819 13:10:36.394789 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |   </ip>
	I0819 13:10:36.394804 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG |   
	I0819 13:10:36.394818 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | </network>
	I0819 13:10:36.394829 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | 
	I0819 13:10:36.399973 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | trying to create private KVM network mk-old-k8s-version-620467 192.168.39.0/24...
	I0819 13:10:36.473133 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | private KVM network mk-old-k8s-version-620467 192.168.39.0/24 created
	I0819 13:10:36.473165 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:36.473090 2146017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:10:36.473178 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting up store path in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467 ...
	I0819 13:10:36.473196 2144521 main.go:141] libmachine: (old-k8s-version-620467) Building disk image from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 13:10:36.473210 2144521 main.go:141] libmachine: (old-k8s-version-620467) Downloading /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 13:10:36.747122 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:36.746971 2146017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa...
	I0819 13:10:37.004759 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:37.004622 2146017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/old-k8s-version-620467.rawdisk...
	I0819 13:10:37.004797 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Writing magic tar header
	I0819 13:10:37.004821 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Writing SSH key tar header
	I0819 13:10:37.004840 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:37.004753 2146017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467 ...
	I0819 13:10:37.004942 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467
	I0819 13:10:37.004976 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467 (perms=drwx------)
	I0819 13:10:37.004993 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines
	I0819 13:10:37.005018 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:10:37.005031 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19479-2086935
	I0819 13:10:37.005050 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 13:10:37.005062 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home/jenkins
	I0819 13:10:37.005078 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube/machines (perms=drwxr-xr-x)
	I0819 13:10:37.005092 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935/.minikube (perms=drwxr-xr-x)
	I0819 13:10:37.005105 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting executable bit set on /home/jenkins/minikube-integration/19479-2086935 (perms=drwxrwxr-x)
	I0819 13:10:37.005120 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 13:10:37.005132 2144521 main.go:141] libmachine: (old-k8s-version-620467) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 13:10:37.005158 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Checking permissions on dir: /home
	I0819 13:10:37.005179 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Skipping /home - not owner
	I0819 13:10:37.005186 2144521 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:10:37.006436 2144521 main.go:141] libmachine: (old-k8s-version-620467) define libvirt domain using xml: 
	I0819 13:10:37.006463 2144521 main.go:141] libmachine: (old-k8s-version-620467) <domain type='kvm'>
	I0819 13:10:37.006475 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <name>old-k8s-version-620467</name>
	I0819 13:10:37.006484 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <memory unit='MiB'>2200</memory>
	I0819 13:10:37.006493 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <vcpu>2</vcpu>
	I0819 13:10:37.006507 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <features>
	I0819 13:10:37.006519 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <acpi/>
	I0819 13:10:37.006526 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <apic/>
	I0819 13:10:37.006534 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <pae/>
	I0819 13:10:37.006560 2144521 main.go:141] libmachine: (old-k8s-version-620467)     
	I0819 13:10:37.006571 2144521 main.go:141] libmachine: (old-k8s-version-620467)   </features>
	I0819 13:10:37.006579 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <cpu mode='host-passthrough'>
	I0819 13:10:37.006591 2144521 main.go:141] libmachine: (old-k8s-version-620467)   
	I0819 13:10:37.006598 2144521 main.go:141] libmachine: (old-k8s-version-620467)   </cpu>
	I0819 13:10:37.006610 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <os>
	I0819 13:10:37.006618 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <type>hvm</type>
	I0819 13:10:37.006627 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <boot dev='cdrom'/>
	I0819 13:10:37.006641 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <boot dev='hd'/>
	I0819 13:10:37.006654 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <bootmenu enable='no'/>
	I0819 13:10:37.006673 2144521 main.go:141] libmachine: (old-k8s-version-620467)   </os>
	I0819 13:10:37.006685 2144521 main.go:141] libmachine: (old-k8s-version-620467)   <devices>
	I0819 13:10:37.006696 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <disk type='file' device='cdrom'>
	I0819 13:10:37.006713 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/boot2docker.iso'/>
	I0819 13:10:37.006727 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <target dev='hdc' bus='scsi'/>
	I0819 13:10:37.006738 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <readonly/>
	I0819 13:10:37.006749 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </disk>
	I0819 13:10:37.006762 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <disk type='file' device='disk'>
	I0819 13:10:37.006775 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 13:10:37.006792 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <source file='/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/old-k8s-version-620467.rawdisk'/>
	I0819 13:10:37.006807 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <target dev='hda' bus='virtio'/>
	I0819 13:10:37.006821 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </disk>
	I0819 13:10:37.006832 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <interface type='network'>
	I0819 13:10:37.006873 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <source network='mk-old-k8s-version-620467'/>
	I0819 13:10:37.006901 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <model type='virtio'/>
	I0819 13:10:37.006912 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </interface>
	I0819 13:10:37.006927 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <interface type='network'>
	I0819 13:10:37.006940 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <source network='default'/>
	I0819 13:10:37.006951 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <model type='virtio'/>
	I0819 13:10:37.006964 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </interface>
	I0819 13:10:37.006975 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <serial type='pty'>
	I0819 13:10:37.006987 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <target port='0'/>
	I0819 13:10:37.007007 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </serial>
	I0819 13:10:37.007021 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <console type='pty'>
	I0819 13:10:37.007046 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <target type='serial' port='0'/>
	I0819 13:10:37.007066 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </console>
	I0819 13:10:37.007076 2144521 main.go:141] libmachine: (old-k8s-version-620467)     <rng model='virtio'>
	I0819 13:10:37.007088 2144521 main.go:141] libmachine: (old-k8s-version-620467)       <backend model='random'>/dev/random</backend>
	I0819 13:10:37.007099 2144521 main.go:141] libmachine: (old-k8s-version-620467)     </rng>
	I0819 13:10:37.007108 2144521 main.go:141] libmachine: (old-k8s-version-620467)     
	I0819 13:10:37.007120 2144521 main.go:141] libmachine: (old-k8s-version-620467)     
	I0819 13:10:37.007131 2144521 main.go:141] libmachine: (old-k8s-version-620467)   </devices>
	I0819 13:10:37.007162 2144521 main.go:141] libmachine: (old-k8s-version-620467) </domain>
	I0819 13:10:37.007186 2144521 main.go:141] libmachine: (old-k8s-version-620467) 
	I0819 13:10:37.011250 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:b4:bc:f9 in network default
	I0819 13:10:37.011785 2144521 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:10:37.011809 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:37.012630 2144521 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:10:37.013020 2144521 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:10:37.013685 2144521 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:10:37.014551 2144521 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:10:38.366450 2144521 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:10:38.367857 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:38.368600 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:38.368631 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:38.368572 2146017 retry.go:31] will retry after 212.183477ms: waiting for machine to come up
	I0819 13:10:38.582190 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:38.583032 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:38.583064 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:38.582983 2146017 retry.go:31] will retry after 236.593016ms: waiting for machine to come up
	I0819 13:10:38.821866 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:38.822566 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:38.822601 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:38.822526 2146017 retry.go:31] will retry after 454.803905ms: waiting for machine to come up
	I0819 13:10:39.279049 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:39.279510 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:39.279533 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:39.279461 2146017 retry.go:31] will retry after 426.014427ms: waiting for machine to come up
	I0819 13:10:39.707161 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:39.707848 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:39.707878 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:39.707788 2146017 retry.go:31] will retry after 573.847046ms: waiting for machine to come up
	I0819 13:10:40.283758 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:40.284251 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:40.284275 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:40.284200 2146017 retry.go:31] will retry after 587.505245ms: waiting for machine to come up
	I0819 13:10:40.873205 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:40.873746 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:40.873783 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:40.873713 2146017 retry.go:31] will retry after 794.536659ms: waiting for machine to come up
	I0819 13:10:41.670329 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:41.670837 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:41.670889 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:41.670797 2146017 retry.go:31] will retry after 1.420823705s: waiting for machine to come up
	I0819 13:10:43.093155 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:43.093733 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:43.093771 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:43.093686 2146017 retry.go:31] will retry after 1.693966785s: waiting for machine to come up
	I0819 13:10:44.789681 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:44.790259 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:44.790313 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:44.790202 2146017 retry.go:31] will retry after 1.854348429s: waiting for machine to come up
	I0819 13:10:46.646583 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:46.647132 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:46.647165 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:46.647100 2146017 retry.go:31] will retry after 1.961413612s: waiting for machine to come up
	I0819 13:10:48.610599 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:48.611096 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:48.611117 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:48.611014 2146017 retry.go:31] will retry after 2.933791164s: waiting for machine to come up
	I0819 13:10:51.547001 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:51.547475 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:51.547495 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:51.547441 2146017 retry.go:31] will retry after 2.765905584s: waiting for machine to come up
	I0819 13:10:54.315285 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:54.315751 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:10:54.315781 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:10:54.315701 2146017 retry.go:31] will retry after 5.191807089s: waiting for machine to come up
	I0819 13:10:59.509240 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.509749 2144521 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:10:59.509771 2144521 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:10:59.509788 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.510113 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467
	I0819 13:10:59.605058 2144521 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:10:59.605103 2144521 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:10:59.605113 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:10:59.608326 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.608793 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:ef:79}
	I0819 13:10:59.608826 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.608936 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:10:59.608980 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:10:59.609036 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:10:59.609052 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:10:59.609100 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:10:59.740375 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:10:59.740667 2144521 main.go:141] libmachine: (old-k8s-version-620467) KVM machine creation complete!
	I0819 13:10:59.741032 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:10:59.762985 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:10:59.763306 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:10:59.763528 2144521 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 13:10:59.763550 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:10:59.765281 2144521 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 13:10:59.765297 2144521 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 13:10:59.765311 2144521 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 13:10:59.765318 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:10:59.768038 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.768797 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:10:59.769182 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.769437 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:10:59.769659 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:10:59.769851 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:10:59.770013 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:10:59.770215 2144521 main.go:141] libmachine: Using SSH client type: native
	I0819 13:10:59.770447 2144521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:10:59.770461 2144521 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 13:10:59.880402 2144521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:10:59.880434 2144521 main.go:141] libmachine: Detecting the provisioner...
	I0819 13:10:59.880446 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:10:59.884215 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.884570 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:10:59.884606 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:10:59.884767 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:10:59.885000 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:10:59.885156 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:10:59.885308 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:10:59.885514 2144521 main.go:141] libmachine: Using SSH client type: native
	I0819 13:10:59.885727 2144521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:10:59.885742 2144521 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 13:11:00.001273 2144521 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 13:11:00.001381 2144521 main.go:141] libmachine: found compatible host: buildroot
	I0819 13:11:00.001394 2144521 main.go:141] libmachine: Provisioning with buildroot...
	I0819 13:11:00.001402 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:11:00.001704 2144521 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:11:00.001736 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:11:00.001940 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.004756 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.005103 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.005126 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.005317 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:00.005517 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.005690 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.005905 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:00.006059 2144521 main.go:141] libmachine: Using SSH client type: native
	I0819 13:11:00.006236 2144521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:11:00.006248 2144521 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:11:00.131938 2144521 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:11:00.131985 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.135060 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.135411 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.135444 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.135611 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:00.135799 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.135966 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.136129 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:00.136320 2144521 main.go:141] libmachine: Using SSH client type: native
	I0819 13:11:00.136567 2144521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:11:00.136592 2144521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:11:00.254344 2144521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:11:00.254380 2144521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:11:00.254425 2144521 buildroot.go:174] setting up certificates
	I0819 13:11:00.254437 2144521 provision.go:84] configureAuth start
	I0819 13:11:00.254451 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:11:00.254796 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:11:00.258161 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.258702 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.258736 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.258930 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.261599 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.261958 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.261990 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.262159 2144521 provision.go:143] copyHostCerts
	I0819 13:11:00.262237 2144521 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:11:00.262266 2144521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:11:00.262359 2144521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:11:00.262546 2144521 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:11:00.262564 2144521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:11:00.262602 2144521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:11:00.262702 2144521 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:11:00.262714 2144521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:11:00.262744 2144521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:11:00.262824 2144521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:11:00.385809 2144521 provision.go:177] copyRemoteCerts
	I0819 13:11:00.385885 2144521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:11:00.385913 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.389204 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.389626 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.389667 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.389860 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:00.390066 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.390225 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:00.390343 2144521 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:11:00.475338 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:11:00.501422 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:11:00.525221 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 13:11:00.548141 2144521 provision.go:87] duration metric: took 293.686161ms to configureAuth
	I0819 13:11:00.548192 2144521 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:11:00.548378 2144521 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:11:00.548481 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.551103 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.551418 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.551455 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.551626 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:00.551821 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.552004 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.552199 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:00.552348 2144521 main.go:141] libmachine: Using SSH client type: native
	I0819 13:11:00.552537 2144521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:11:00.552551 2144521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:11:00.828105 2144521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:11:00.828137 2144521 main.go:141] libmachine: Checking connection to Docker...
	I0819 13:11:00.828149 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetURL
	I0819 13:11:00.829477 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using libvirt version 6000000
	I0819 13:11:00.831535 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.831913 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.831952 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.832177 2144521 main.go:141] libmachine: Docker is up and running!
	I0819 13:11:00.832196 2144521 main.go:141] libmachine: Reticulating splines...
	I0819 13:11:00.832205 2144521 client.go:171] duration metric: took 24.441164705s to LocalClient.Create
	I0819 13:11:00.832236 2144521 start.go:167] duration metric: took 24.441252312s to libmachine.API.Create "old-k8s-version-620467"
	I0819 13:11:00.832249 2144521 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:11:00.832260 2144521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:11:00.832277 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:11:00.832540 2144521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:11:00.832565 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.834966 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.835433 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.835460 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.835635 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:00.835842 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.836005 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:00.836134 2144521 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:11:00.918906 2144521 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:11:00.923178 2144521 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:11:00.923211 2144521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:11:00.923274 2144521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:11:00.923409 2144521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:11:00.923524 2144521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:11:00.933195 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:11:00.958837 2144521 start.go:296] duration metric: took 126.571848ms for postStartSetup
	I0819 13:11:00.958901 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:11:00.959559 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:11:00.962312 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.962763 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.962797 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.963091 2144521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:11:00.963286 2144521 start.go:128] duration metric: took 24.593728845s to createHost
	I0819 13:11:00.963311 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:00.965820 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.966177 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:00.966221 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:00.966311 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:00.966509 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.966675 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:00.966823 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:00.966982 2144521 main.go:141] libmachine: Using SSH client type: native
	I0819 13:11:00.967166 2144521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:11:00.967181 2144521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:11:01.076962 2144521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073061.048147065
	
	I0819 13:11:01.076991 2144521 fix.go:216] guest clock: 1724073061.048147065
	I0819 13:11:01.077002 2144521 fix.go:229] Guest: 2024-08-19 13:11:01.048147065 +0000 UTC Remote: 2024-08-19 13:11:00.963297489 +0000 UTC m=+50.122875252 (delta=84.849576ms)
	I0819 13:11:01.077040 2144521 fix.go:200] guest clock delta is within tolerance: 84.849576ms
	I0819 13:11:01.077047 2144521 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 24.707681748s
	I0819 13:11:01.077079 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:11:01.077398 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:11:01.080817 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:01.081261 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:01.081289 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:01.081500 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:11:01.082091 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:11:01.082306 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:11:01.082402 2144521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:11:01.082459 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:01.082551 2144521 ssh_runner.go:195] Run: cat /version.json
	I0819 13:11:01.082575 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:11:01.085576 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:01.085858 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:01.085895 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:01.085917 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:01.086076 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:01.086257 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:01.086287 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:01.086314 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:01.086431 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:01.086456 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:11:01.086613 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:11:01.086637 2144521 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:11:01.086743 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:11:01.086906 2144521 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:11:01.202563 2144521 ssh_runner.go:195] Run: systemctl --version
	I0819 13:11:01.209752 2144521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:11:01.368653 2144521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:11:01.377368 2144521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:11:01.377450 2144521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:11:01.401680 2144521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:11:01.401705 2144521 start.go:495] detecting cgroup driver to use...
	I0819 13:11:01.401773 2144521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:11:01.423577 2144521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:11:01.438215 2144521 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:11:01.438291 2144521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:11:01.453684 2144521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:11:01.468067 2144521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:11:01.593562 2144521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:11:01.752475 2144521 docker.go:233] disabling docker service ...
	I0819 13:11:01.752556 2144521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:11:01.769409 2144521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:11:01.783520 2144521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:11:01.934033 2144521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:11:02.094564 2144521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:11:02.111284 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:11:02.132775 2144521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:11:02.132841 2144521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:11:02.144044 2144521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:11:02.144121 2144521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:11:02.156893 2144521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:11:02.173117 2144521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:11:02.185344 2144521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:11:02.197987 2144521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:11:02.207857 2144521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:11:02.207923 2144521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:11:02.221236 2144521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:11:02.233688 2144521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:11:02.380412 2144521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:11:02.564374 2144521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:11:02.564443 2144521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:11:02.573751 2144521 start.go:563] Will wait 60s for crictl version
	I0819 13:11:02.573804 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:02.582261 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:11:02.626760 2144521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:11:02.626850 2144521 ssh_runner.go:195] Run: crio --version
	I0819 13:11:02.663270 2144521 ssh_runner.go:195] Run: crio --version
	I0819 13:11:02.701132 2144521 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:11:02.702109 2144521 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:11:02.705186 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:02.705591 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:10:52 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:11:02.705618 2144521 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:11:02.705871 2144521 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:11:02.711241 2144521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:11:02.728380 2144521 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:11:02.728505 2144521 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:11:02.728560 2144521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:11:02.771014 2144521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:11:02.771101 2144521 ssh_runner.go:195] Run: which lz4
	I0819 13:11:02.775718 2144521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:11:02.781163 2144521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:11:02.781195 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:11:04.569838 2144521 crio.go:462] duration metric: took 1.794164327s to copy over tarball
	I0819 13:11:04.569917 2144521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:11:07.541988 2144521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.972042221s)
	I0819 13:11:07.542030 2144521 crio.go:469] duration metric: took 2.972162106s to extract the tarball
	I0819 13:11:07.542040 2144521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:11:07.589187 2144521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:11:07.644814 2144521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:11:07.644848 2144521 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:11:07.644938 2144521 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:11:07.644967 2144521 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:07.645003 2144521 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:07.644940 2144521 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:07.645073 2144521 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:11:07.644979 2144521 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:07.645256 2144521 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:11:07.644938 2144521 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:07.649568 2144521 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:11:07.649680 2144521 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:07.649548 2144521 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:07.649739 2144521 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:11:07.649846 2144521 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:07.649997 2144521 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:11:07.650016 2144521 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:07.650529 2144521 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:07.802106 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:11:07.816632 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:07.817280 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:07.818664 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:07.819569 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:07.848273 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:07.852891 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:11:07.857882 2144521 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:11:07.857938 2144521 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:11:07.857987 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:07.911716 2144521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:11:07.993583 2144521 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:11:07.993691 2144521 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:07.993758 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:08.030548 2144521 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:11:08.031115 2144521 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:11:08.031160 2144521 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:08.031209 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:08.031311 2144521 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:11:08.031331 2144521 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:08.031371 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:08.031508 2144521 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:08.031543 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:08.051011 2144521 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:11:08.051070 2144521 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:08.051118 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:08.051217 2144521 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:11:08.051244 2144521 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:11:08.051265 2144521 ssh_runner.go:195] Run: which crictl
	I0819 13:11:08.051337 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:11:08.170839 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:08.170984 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:08.171049 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:08.171113 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:08.171226 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:11:08.171301 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:08.171380 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:11:08.355760 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:08.355833 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:08.355936 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:08.355954 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:08.356015 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:11:08.356092 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:08.356115 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:11:08.521649 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:11:08.521791 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:11:08.530764 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:11:08.530933 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:11:08.531039 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:11:08.531164 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:11:08.531348 2144521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:11:08.647045 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:11:08.663751 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:11:08.663823 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:11:08.663915 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:11:08.686425 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:11:08.686528 2144521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:11:08.686566 2144521 cache_images.go:92] duration metric: took 1.041701275s to LoadCachedImages
	W0819 13:11:08.686645 2144521 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0819 13:11:08.686657 2144521 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:11:08.686793 2144521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:11:08.686867 2144521 ssh_runner.go:195] Run: crio config
	I0819 13:11:08.736062 2144521 cni.go:84] Creating CNI manager for ""
	I0819 13:11:08.736086 2144521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:11:08.736098 2144521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:11:08.736120 2144521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:11:08.736296 2144521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:11:08.736361 2144521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:11:08.748174 2144521 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:11:08.748252 2144521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:11:08.759570 2144521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:11:08.777297 2144521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:11:08.796417 2144521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:11:08.817553 2144521 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:11:08.821787 2144521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:11:08.836442 2144521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:11:09.006930 2144521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:11:09.024435 2144521 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:11:09.024466 2144521 certs.go:194] generating shared ca certs ...
	I0819 13:11:09.024489 2144521 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.024691 2144521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:11:09.024752 2144521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:11:09.024767 2144521 certs.go:256] generating profile certs ...
	I0819 13:11:09.024846 2144521 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:11:09.024864 2144521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.crt with IP's: []
	I0819 13:11:09.138418 2144521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.crt ...
	I0819 13:11:09.138461 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.crt: {Name:mkcd63956c5baca319ea20fc314b032a92fa0a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.138678 2144521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key ...
	I0819 13:11:09.138696 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key: {Name:mk5d546432a6f3b0e38fb4f3fcdbd8037e0cbc7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.138829 2144521 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:11:09.138852 2144521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt.53723c14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238]
	I0819 13:11:09.284157 2144521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt.53723c14 ...
	I0819 13:11:09.284206 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt.53723c14: {Name:mk5121e0e97e0f6eac9c9fa63f4478a65765c851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.284463 2144521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14 ...
	I0819 13:11:09.284486 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14: {Name:mka4856bab4efb19cadab94fc1536e61fb65c3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.284612 2144521 certs.go:381] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt.53723c14 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt
	I0819 13:11:09.284726 2144521 certs.go:385] copying /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key
	I0819 13:11:09.284818 2144521 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:11:09.284841 2144521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt with IP's: []
	I0819 13:11:09.348905 2144521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt ...
	I0819 13:11:09.348938 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt: {Name:mk95c0904bf11ae613b448e10c407518d04ffccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.440643 2144521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key ...
	I0819 13:11:09.440731 2144521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key: {Name:mke98392c678cdf1745bb17320b717d3f8dd63b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:11:09.441080 2144521 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:11:09.441143 2144521 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:11:09.441165 2144521 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:11:09.441204 2144521 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:11:09.441236 2144521 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:11:09.441284 2144521 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:11:09.441360 2144521 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:11:09.442349 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:11:09.471583 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:11:09.498989 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:11:09.525971 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:11:09.553122 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:11:09.580758 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:11:09.626281 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:11:09.655685 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:11:09.690720 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:11:09.719160 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:11:09.745144 2144521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:11:09.770978 2144521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:11:09.789313 2144521 ssh_runner.go:195] Run: openssl version
	I0819 13:11:09.795574 2144521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:11:09.807362 2144521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:11:09.813247 2144521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:11:09.813330 2144521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:11:09.820316 2144521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:11:09.832779 2144521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:11:09.846948 2144521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:11:09.851828 2144521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:11:09.851896 2144521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:11:09.857987 2144521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:11:09.872670 2144521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:11:09.887717 2144521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:11:09.897109 2144521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:11:09.897174 2144521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:11:09.905630 2144521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:11:09.925676 2144521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:11:09.932317 2144521 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 13:11:09.932382 2144521 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:11:09.932496 2144521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:11:09.932577 2144521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:11:10.032470 2144521 cri.go:89] found id: ""
	I0819 13:11:10.032556 2144521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:11:10.049596 2144521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:11:10.061331 2144521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:11:10.073503 2144521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:11:10.073530 2144521 kubeadm.go:157] found existing configuration files:
	
	I0819 13:11:10.073595 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:11:10.083952 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:11:10.084054 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:11:10.096372 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:11:10.108843 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:11:10.108903 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:11:10.121650 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:11:10.133702 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:11:10.133757 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:11:10.147012 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:11:10.158108 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:11:10.158173 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:11:10.171646 2144521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:11:10.347593 2144521 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:11:10.347707 2144521 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:11:10.548691 2144521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:11:10.548865 2144521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:11:10.549041 2144521 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:11:10.800207 2144521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:11:10.807735 2144521 out.go:235]   - Generating certificates and keys ...
	I0819 13:11:10.807862 2144521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:11:10.807946 2144521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:11:11.224858 2144521 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 13:11:11.632391 2144521 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 13:11:11.743173 2144521 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 13:11:12.040900 2144521 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 13:11:12.165763 2144521 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 13:11:12.166117 2144521 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-620467] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0819 13:11:12.364381 2144521 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 13:11:12.364575 2144521 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-620467] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0819 13:11:12.659809 2144521 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 13:11:12.769071 2144521 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 13:11:12.916712 2144521 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 13:11:12.916777 2144521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:11:13.188383 2144521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:11:13.410505 2144521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:11:13.658061 2144521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:11:14.349804 2144521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:11:14.375356 2144521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:11:14.375654 2144521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:11:14.375729 2144521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:11:14.611141 2144521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:11:14.612609 2144521 out.go:235]   - Booting up control plane ...
	I0819 13:11:14.612767 2144521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:11:14.633084 2144521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:11:14.633812 2144521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:11:14.639313 2144521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:11:14.648767 2144521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:11:54.642132 2144521 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:11:54.642796 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:11:54.643043 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:11:59.643606 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:11:59.643879 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:12:09.642917 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:12:09.643197 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:12:29.642685 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:12:29.642992 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:13:09.644512 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:13:09.645173 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:13:09.645198 2144521 kubeadm.go:310] 
	I0819 13:13:09.645324 2144521 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:13:09.645460 2144521 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:13:09.645487 2144521 kubeadm.go:310] 
	I0819 13:13:09.645560 2144521 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:13:09.645629 2144521 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:13:09.645848 2144521 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:13:09.645864 2144521 kubeadm.go:310] 
	I0819 13:13:09.646092 2144521 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:13:09.646170 2144521 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:13:09.646246 2144521 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:13:09.646255 2144521 kubeadm.go:310] 
	I0819 13:13:09.646502 2144521 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:13:09.646675 2144521 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:13:09.646687 2144521 kubeadm.go:310] 
	I0819 13:13:09.646911 2144521 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:13:09.647092 2144521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:13:09.647262 2144521 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:13:09.647428 2144521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:13:09.647458 2144521 kubeadm.go:310] 
	I0819 13:13:09.647905 2144521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:13:09.648652 2144521 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:13:09.648806 2144521 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:13:09.648918 2144521 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-620467] and IPs [192.168.39.238 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-620467] and IPs [192.168.39.238 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-620467] and IPs [192.168.39.238 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-620467] and IPs [192.168.39.238 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:13:09.648971 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:13:10.141056 2144521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:13:10.155553 2144521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:13:10.165640 2144521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:13:10.165660 2144521 kubeadm.go:157] found existing configuration files:
	
	I0819 13:13:10.165706 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:13:10.176240 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:13:10.176313 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:13:10.187220 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:13:10.197787 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:13:10.197881 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:13:10.208492 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:13:10.218870 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:13:10.218930 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:13:10.229422 2144521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:13:10.238612 2144521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:13:10.238669 2144521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:13:10.247866 2144521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:13:10.318229 2144521 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:13:10.318313 2144521 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:13:10.475208 2144521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:13:10.475381 2144521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:13:10.475559 2144521 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:13:10.692963 2144521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:13:10.694437 2144521 out.go:235]   - Generating certificates and keys ...
	I0819 13:13:10.694555 2144521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:13:10.694643 2144521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:13:10.694752 2144521 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:13:10.694862 2144521 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:13:10.694959 2144521 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:13:10.695246 2144521 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:13:10.695673 2144521 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:13:10.696170 2144521 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:13:10.696711 2144521 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:13:10.697193 2144521 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:13:10.697379 2144521 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:13:10.697484 2144521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:13:10.822045 2144521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:13:11.013274 2144521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:13:11.198491 2144521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:13:11.256177 2144521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:13:11.270916 2144521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:13:11.271054 2144521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:13:11.271118 2144521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:13:11.408953 2144521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:13:11.410663 2144521 out.go:235]   - Booting up control plane ...
	I0819 13:13:11.410771 2144521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:13:11.419276 2144521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:13:11.421901 2144521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:13:11.422637 2144521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:13:11.429253 2144521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:13:51.431735 2144521 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:13:51.431947 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:13:51.432123 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:13:56.432831 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:13:56.433034 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:14:06.433541 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:14:06.433753 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:14:26.432855 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:14:26.433034 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:15:06.433063 2144521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:15:06.433393 2144521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:15:06.433427 2144521 kubeadm.go:310] 
	I0819 13:15:06.433484 2144521 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:15:06.433544 2144521 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:15:06.433554 2144521 kubeadm.go:310] 
	I0819 13:15:06.433608 2144521 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:15:06.433654 2144521 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:15:06.433808 2144521 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:15:06.433843 2144521 kubeadm.go:310] 
	I0819 13:15:06.433962 2144521 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:15:06.434011 2144521 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:15:06.434069 2144521 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:15:06.434078 2144521 kubeadm.go:310] 
	I0819 13:15:06.434283 2144521 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:15:06.434355 2144521 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:15:06.434364 2144521 kubeadm.go:310] 
	I0819 13:15:06.434466 2144521 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:15:06.434542 2144521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:15:06.434606 2144521 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:15:06.434713 2144521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:15:06.434739 2144521 kubeadm.go:310] 
	I0819 13:15:06.435689 2144521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:15:06.435801 2144521 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:15:06.435898 2144521 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:15:06.435993 2144521 kubeadm.go:394] duration metric: took 3m56.503620155s to StartCluster
	I0819 13:15:06.436058 2144521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:15:06.436138 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:15:06.479572 2144521 cri.go:89] found id: ""
	I0819 13:15:06.479617 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.479627 2144521 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:15:06.479633 2144521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:15:06.479730 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:15:06.515059 2144521 cri.go:89] found id: ""
	I0819 13:15:06.515108 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.515121 2144521 logs.go:278] No container was found matching "etcd"
	I0819 13:15:06.515129 2144521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:15:06.515192 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:15:06.556349 2144521 cri.go:89] found id: ""
	I0819 13:15:06.556384 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.556395 2144521 logs.go:278] No container was found matching "coredns"
	I0819 13:15:06.556403 2144521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:15:06.556468 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:15:06.592190 2144521 cri.go:89] found id: ""
	I0819 13:15:06.592228 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.592238 2144521 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:15:06.592246 2144521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:15:06.592308 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:15:06.628013 2144521 cri.go:89] found id: ""
	I0819 13:15:06.628045 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.628057 2144521 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:15:06.628065 2144521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:15:06.628132 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:15:06.664103 2144521 cri.go:89] found id: ""
	I0819 13:15:06.664132 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.664141 2144521 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:15:06.664149 2144521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:15:06.664222 2144521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:15:06.700725 2144521 cri.go:89] found id: ""
	I0819 13:15:06.700757 2144521 logs.go:276] 0 containers: []
	W0819 13:15:06.700765 2144521 logs.go:278] No container was found matching "kindnet"
	I0819 13:15:06.700776 2144521 logs.go:123] Gathering logs for container status ...
	I0819 13:15:06.700795 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:15:06.741754 2144521 logs.go:123] Gathering logs for kubelet ...
	I0819 13:15:06.741800 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:15:06.801613 2144521 logs.go:123] Gathering logs for dmesg ...
	I0819 13:15:06.801666 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:15:06.823414 2144521 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:15:06.823445 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:15:06.964872 2144521 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:15:06.964899 2144521 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:15:06.964918 2144521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 13:15:07.075674 2144521 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:15:07.075797 2144521 out.go:270] * 
	* 
	W0819 13:15:07.075880 2144521 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:15:07.075907 2144521 out.go:270] * 
	* 
	W0819 13:15:07.076758 2144521 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:15:07.080222 2144521 out.go:201] 
	W0819 13:15:07.082046 2144521 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:15:07.082096 2144521 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:15:07.082132 2144521 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:15:07.083481 2144521 out.go:201] 

                                                
                                                
** /stderr **
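For context on the repeated [kubelet-check] lines in the log above: kubeadm polls the kubelet's local health endpoint on port 10248 and keeps getting "connection refused", i.e. the kubelet never came up before the wait-control-plane timeout. A minimal, hypothetical Go sketch of reproducing that probe by hand (this is not kubeadm's own code):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint the [kubelet-check] lines reference; a connection
		// refused error here matches the failures shown in the log.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}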
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-620467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 6 (253.182075ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:07.375922 2151927 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-620467" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (296.56s)
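For reference, this test drives the minikube binary as an external process and asserts on its exit status; the sketch below is a hypothetical, self-contained approximation of that invocation using the arguments quoted in the assertion above (the real logic lives in start_stop_delete_test.go and its helpers and differs in detail):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the failing start command; in the run above this exited
		// with status 109 (K8S_KUBELET_NOT_RUNNING).
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "old-k8s-version-620467",
			"--memory=2200", "--alsologtostderr", "--wait=true",
			"--driver=kvm2", "--container-runtime=crio",
			"--kubernetes-version=v1.20.0")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("minikube start exited %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("minikube start succeeded")
	}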

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-441627 --alsologtostderr -v=3
E0819 13:12:44.027011 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.033432 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.045667 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.067179 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.108771 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.190241 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.351829 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:44.673698 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:45.315433 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:46.596806 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:49.158610 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-441627 --alsologtostderr -v=3: exit status 82 (2m0.879524086s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-441627"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:12:29.563476 2150134 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:12:29.563755 2150134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:12:29.563765 2150134 out.go:358] Setting ErrFile to fd 2...
	I0819 13:12:29.563770 2150134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:12:29.563966 2150134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:12:29.564275 2150134 out.go:352] Setting JSON to false
	I0819 13:12:29.564373 2150134 mustload.go:65] Loading cluster: no-preload-441627
	I0819 13:12:29.564708 2150134 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:12:29.564783 2150134 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:12:29.564952 2150134 mustload.go:65] Loading cluster: no-preload-441627
	I0819 13:12:29.565079 2150134 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:12:29.565117 2150134 stop.go:39] StopHost: no-preload-441627
	I0819 13:12:29.565485 2150134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:12:29.565546 2150134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:12:29.583599 2150134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0819 13:12:29.584085 2150134 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:12:29.584720 2150134 main.go:141] libmachine: Using API Version  1
	I0819 13:12:29.584745 2150134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:12:29.585215 2150134 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:12:29.588620 2150134 out.go:177] * Stopping node "no-preload-441627"  ...
	I0819 13:12:29.593215 2150134 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 13:12:29.593270 2150134 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:12:29.593618 2150134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 13:12:29.593655 2150134 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:12:29.597996 2150134 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:12:29.598035 2150134 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:11:18 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:12:29.598058 2150134 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:12:29.598355 2150134 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:12:29.598574 2150134 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:12:29.598748 2150134 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:12:29.598942 2150134 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:12:29.723092 2150134 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 13:12:29.782788 2150134 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 13:12:29.849222 2150134 main.go:141] libmachine: Stopping "no-preload-441627"...
	I0819 13:12:29.849272 2150134 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:12:29.851408 2150134 main.go:141] libmachine: (no-preload-441627) Calling .Stop
	I0819 13:12:29.856298 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 0/120
	I0819 13:12:30.857809 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 1/120
	I0819 13:12:31.859354 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 2/120
	I0819 13:12:32.860832 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 3/120
	I0819 13:12:33.862341 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 4/120
	I0819 13:12:34.864429 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 5/120
	I0819 13:12:35.865962 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 6/120
	I0819 13:12:36.867321 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 7/120
	I0819 13:12:37.869167 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 8/120
	I0819 13:12:38.871140 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 9/120
	I0819 13:12:39.873441 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 10/120
	I0819 13:12:40.874688 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 11/120
	I0819 13:12:41.876501 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 12/120
	I0819 13:12:42.877875 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 13/120
	I0819 13:12:43.879386 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 14/120
	I0819 13:12:44.880920 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 15/120
	I0819 13:12:45.882937 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 16/120
	I0819 13:12:46.884114 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 17/120
	I0819 13:12:47.885632 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 18/120
	I0819 13:12:48.887024 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 19/120
	I0819 13:12:49.889301 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 20/120
	I0819 13:12:50.890619 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 21/120
	I0819 13:12:51.892008 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 22/120
	I0819 13:12:52.893454 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 23/120
	I0819 13:12:53.895053 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 24/120
	I0819 13:12:54.897631 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 25/120
	I0819 13:12:55.899068 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 26/120
	I0819 13:12:56.900531 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 27/120
	I0819 13:12:57.902323 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 28/120
	I0819 13:12:58.903806 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 29/120
	I0819 13:12:59.906020 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 30/120
	I0819 13:13:00.907402 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 31/120
	I0819 13:13:01.909057 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 32/120
	I0819 13:13:02.910627 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 33/120
	I0819 13:13:03.912305 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 34/120
	I0819 13:13:04.914715 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 35/120
	I0819 13:13:05.916076 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 36/120
	I0819 13:13:06.917727 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 37/120
	I0819 13:13:07.919320 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 38/120
	I0819 13:13:08.920835 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 39/120
	I0819 13:13:09.922916 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 40/120
	I0819 13:13:10.924477 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 41/120
	I0819 13:13:11.925792 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 42/120
	I0819 13:13:12.927519 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 43/120
	I0819 13:13:13.928855 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 44/120
	I0819 13:13:14.930943 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 45/120
	I0819 13:13:15.932250 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 46/120
	I0819 13:13:16.933593 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 47/120
	I0819 13:13:17.934938 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 48/120
	I0819 13:13:18.936585 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 49/120
	I0819 13:13:19.938992 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 50/120
	I0819 13:13:20.940785 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 51/120
	I0819 13:13:21.942724 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 52/120
	I0819 13:13:22.944314 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 53/120
	I0819 13:13:23.945477 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 54/120
	I0819 13:13:24.947767 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 55/120
	I0819 13:13:25.949099 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 56/120
	I0819 13:13:26.950696 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 57/120
	I0819 13:13:27.952411 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 58/120
	I0819 13:13:28.953866 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 59/120
	I0819 13:13:29.956203 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 60/120
	I0819 13:13:30.957904 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 61/120
	I0819 13:13:31.959471 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 62/120
	I0819 13:13:32.961252 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 63/120
	I0819 13:13:33.962656 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 64/120
	I0819 13:13:34.964495 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 65/120
	I0819 13:13:35.965880 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 66/120
	I0819 13:13:36.967195 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 67/120
	I0819 13:13:38.073721 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 68/120
	I0819 13:13:39.075306 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 69/120
	I0819 13:13:40.078053 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 70/120
	I0819 13:13:41.079652 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 71/120
	I0819 13:13:42.081219 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 72/120
	I0819 13:13:43.083154 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 73/120
	I0819 13:13:44.084732 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 74/120
	I0819 13:13:45.086317 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 75/120
	I0819 13:13:46.088279 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 76/120
	I0819 13:13:47.089936 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 77/120
	I0819 13:13:48.091553 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 78/120
	I0819 13:13:49.093120 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 79/120
	I0819 13:13:50.095280 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 80/120
	I0819 13:13:51.097422 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 81/120
	I0819 13:13:52.098884 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 82/120
	I0819 13:13:53.100526 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 83/120
	I0819 13:13:54.102654 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 84/120
	I0819 13:13:55.104695 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 85/120
	I0819 13:13:56.106597 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 86/120
	I0819 13:13:57.108234 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 87/120
	I0819 13:13:58.110003 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 88/120
	I0819 13:13:59.111531 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 89/120
	I0819 13:14:00.113729 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 90/120
	I0819 13:14:01.115041 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 91/120
	I0819 13:14:02.116517 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 92/120
	I0819 13:14:03.117931 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 93/120
	I0819 13:14:04.119312 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 94/120
	I0819 13:14:05.121163 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 95/120
	I0819 13:14:06.122602 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 96/120
	I0819 13:14:07.124116 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 97/120
	I0819 13:14:08.125609 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 98/120
	I0819 13:14:09.128119 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 99/120
	I0819 13:14:10.129547 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 100/120
	I0819 13:14:11.131012 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 101/120
	I0819 13:14:12.132554 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 102/120
	I0819 13:14:13.133987 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 103/120
	I0819 13:14:14.135477 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 104/120
	I0819 13:14:15.137182 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 105/120
	I0819 13:14:16.138670 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 106/120
	I0819 13:14:17.140073 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 107/120
	I0819 13:14:18.141616 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 108/120
	I0819 13:14:19.143074 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 109/120
	I0819 13:14:20.145389 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 110/120
	I0819 13:14:21.146649 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 111/120
	I0819 13:14:22.148088 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 112/120
	I0819 13:14:23.149684 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 113/120
	I0819 13:14:24.151161 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 114/120
	I0819 13:14:25.153381 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 115/120
	I0819 13:14:26.154777 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 116/120
	I0819 13:14:27.156248 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 117/120
	I0819 13:14:28.157767 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 118/120
	I0819 13:14:29.159290 2150134 main.go:141] libmachine: (no-preload-441627) Waiting for machine to stop 119/120
	I0819 13:14:30.159811 2150134 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 13:14:30.159903 2150134 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 13:14:30.161673 2150134 out.go:201] 
	W0819 13:14:30.162785 2150134 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 13:14:30.162802 2150134 out.go:270] * 
	* 
	W0819 13:14:30.391611 2150134 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:14:30.393749 2150134 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-441627 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627
E0819 13:14:32.774469 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:37.896897 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:40.836850 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:40.843265 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:40.854613 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:40.876056 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:40.917533 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:40.999019 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:41.160697 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:41.481983 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:42.123813 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:43.405494 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:45.967821 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:48.139060 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627: exit status 3 (18.608903008s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:14:49.004572 2151641 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	E0819 13:14:49.004592 2151641 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-441627" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.49s)
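
The stop failures in this report share one pattern: minikube backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, asks the kvm2 driver to stop the VM, then polls once per second for 120 attempts ("Waiting for machine to stop N/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. A hedged diagnostic sketch for inspecting this on the CI host follows; it assumes shell access to the libvirt host and that the libvirt domain carries the profile name, as the stop logs for the other profiles in this report show.

	# Re-run the failing stop with verbose output (same invocation the test uses).
	out/minikube-linux-amd64 stop -p no-preload-441627 --alsologtostderr -v=3

	# Check what libvirt thinks the domain is doing; a domain still reported
	# as running here matches the 'current state "Running"' error above.
	sudo virsh list --all
	sudo virsh domstate no-preload-441627

	# Last resort for a wedged guest (hard power-off, not a graceful stop).
	sudo virsh destroy no-preload-441627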

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-038572 --alsologtostderr -v=3
E0819 13:13:12.373610 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:13:25.004220 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:13:32.855939 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-038572 --alsologtostderr -v=3: exit status 82 (2m0.517224177s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-038572"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:13:06.833571 2150639 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:13:06.833692 2150639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:13:06.833702 2150639 out.go:358] Setting ErrFile to fd 2...
	I0819 13:13:06.833706 2150639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:13:06.833924 2150639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:13:06.834186 2150639 out.go:352] Setting JSON to false
	I0819 13:13:06.834343 2150639 mustload.go:65] Loading cluster: default-k8s-diff-port-038572
	I0819 13:13:06.834735 2150639 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:13:06.834818 2150639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:13:06.835003 2150639 mustload.go:65] Loading cluster: default-k8s-diff-port-038572
	I0819 13:13:06.835148 2150639 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:13:06.835184 2150639 stop.go:39] StopHost: default-k8s-diff-port-038572
	I0819 13:13:06.835602 2150639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:13:06.835645 2150639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:13:06.850430 2150639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0819 13:13:06.850888 2150639 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:13:06.851418 2150639 main.go:141] libmachine: Using API Version  1
	I0819 13:13:06.851443 2150639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:13:06.851819 2150639 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:13:06.854002 2150639 out.go:177] * Stopping node "default-k8s-diff-port-038572"  ...
	I0819 13:13:06.855541 2150639 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 13:13:06.855600 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:13:06.855842 2150639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 13:13:06.855870 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:13:06.858620 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:13:06.859024 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:11:47 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:13:06.859050 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:13:06.859171 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:13:06.859365 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:13:06.859492 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:13:06.859648 2150639 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:13:06.948687 2150639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 13:13:07.008450 2150639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 13:13:07.069999 2150639 main.go:141] libmachine: Stopping "default-k8s-diff-port-038572"...
	I0819 13:13:07.070042 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:13:07.072122 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Stop
	I0819 13:13:07.076051 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 0/120
	I0819 13:13:08.077607 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 1/120
	I0819 13:13:09.079213 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 2/120
	I0819 13:13:10.080903 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 3/120
	I0819 13:13:11.082832 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 4/120
	I0819 13:13:12.084737 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 5/120
	I0819 13:13:13.086149 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 6/120
	I0819 13:13:14.087656 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 7/120
	I0819 13:13:15.089161 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 8/120
	I0819 13:13:16.090574 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 9/120
	I0819 13:13:17.092239 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 10/120
	I0819 13:13:18.093790 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 11/120
	I0819 13:13:19.096252 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 12/120
	I0819 13:13:20.097752 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 13/120
	I0819 13:13:21.099363 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 14/120
	I0819 13:13:22.101708 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 15/120
	I0819 13:13:23.103473 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 16/120
	I0819 13:13:24.104946 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 17/120
	I0819 13:13:25.106945 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 18/120
	I0819 13:13:26.108560 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 19/120
	I0819 13:13:27.111088 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 20/120
	I0819 13:13:28.112691 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 21/120
	I0819 13:13:29.114921 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 22/120
	I0819 13:13:30.116141 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 23/120
	I0819 13:13:31.117721 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 24/120
	I0819 13:13:32.120054 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 25/120
	I0819 13:13:33.121698 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 26/120
	I0819 13:13:34.123235 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 27/120
	I0819 13:13:35.124646 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 28/120
	I0819 13:13:36.126632 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 29/120
	I0819 13:13:37.128662 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 30/120
	I0819 13:13:38.130759 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 31/120
	I0819 13:13:39.132261 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 32/120
	I0819 13:13:40.133889 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 33/120
	I0819 13:13:41.135255 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 34/120
	I0819 13:13:42.137388 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 35/120
	I0819 13:13:43.138891 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 36/120
	I0819 13:13:44.141423 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 37/120
	I0819 13:13:45.142929 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 38/120
	I0819 13:13:46.144403 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 39/120
	I0819 13:13:47.146547 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 40/120
	I0819 13:13:48.148243 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 41/120
	I0819 13:13:49.149599 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 42/120
	I0819 13:13:50.151077 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 43/120
	I0819 13:13:51.152643 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 44/120
	I0819 13:13:52.154675 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 45/120
	I0819 13:13:53.156053 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 46/120
	I0819 13:13:54.157489 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 47/120
	I0819 13:13:55.158843 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 48/120
	I0819 13:13:56.160190 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 49/120
	I0819 13:13:57.162460 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 50/120
	I0819 13:13:58.163820 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 51/120
	I0819 13:13:59.165009 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 52/120
	I0819 13:14:00.166252 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 53/120
	I0819 13:14:01.167660 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 54/120
	I0819 13:14:02.169542 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 55/120
	I0819 13:14:03.170923 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 56/120
	I0819 13:14:04.172320 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 57/120
	I0819 13:14:05.174050 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 58/120
	I0819 13:14:06.175800 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 59/120
	I0819 13:14:07.178281 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 60/120
	I0819 13:14:08.180146 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 61/120
	I0819 13:14:09.181590 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 62/120
	I0819 13:14:10.183351 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 63/120
	I0819 13:14:11.184915 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 64/120
	I0819 13:14:12.186568 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 65/120
	I0819 13:14:13.187967 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 66/120
	I0819 13:14:14.189350 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 67/120
	I0819 13:14:15.191007 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 68/120
	I0819 13:14:16.192768 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 69/120
	I0819 13:14:17.194816 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 70/120
	I0819 13:14:18.196317 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 71/120
	I0819 13:14:19.197965 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 72/120
	I0819 13:14:20.199661 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 73/120
	I0819 13:14:21.201084 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 74/120
	I0819 13:14:22.203147 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 75/120
	I0819 13:14:23.205078 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 76/120
	I0819 13:14:24.206600 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 77/120
	I0819 13:14:25.208343 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 78/120
	I0819 13:14:26.210635 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 79/120
	I0819 13:14:27.212815 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 80/120
	I0819 13:14:28.214603 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 81/120
	I0819 13:14:29.215912 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 82/120
	I0819 13:14:30.217068 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 83/120
	I0819 13:14:31.218464 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 84/120
	I0819 13:14:32.220532 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 85/120
	I0819 13:14:33.222121 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 86/120
	I0819 13:14:34.223519 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 87/120
	I0819 13:14:35.224834 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 88/120
	I0819 13:14:36.226175 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 89/120
	I0819 13:14:37.228181 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 90/120
	I0819 13:14:38.229836 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 91/120
	I0819 13:14:39.231383 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 92/120
	I0819 13:14:40.232926 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 93/120
	I0819 13:14:41.234425 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 94/120
	I0819 13:14:42.236576 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 95/120
	I0819 13:14:43.237992 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 96/120
	I0819 13:14:44.239419 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 97/120
	I0819 13:14:45.240763 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 98/120
	I0819 13:14:46.242108 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 99/120
	I0819 13:14:47.244680 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 100/120
	I0819 13:14:48.246592 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 101/120
	I0819 13:14:49.248127 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 102/120
	I0819 13:14:50.249594 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 103/120
	I0819 13:14:51.251376 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 104/120
	I0819 13:14:52.253488 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 105/120
	I0819 13:14:53.255008 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 106/120
	I0819 13:14:54.256812 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 107/120
	I0819 13:14:55.259127 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 108/120
	I0819 13:14:56.260793 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 109/120
	I0819 13:14:57.262760 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 110/120
	I0819 13:14:58.264696 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 111/120
	I0819 13:14:59.266896 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 112/120
	I0819 13:15:00.268735 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 113/120
	I0819 13:15:01.270818 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 114/120
	I0819 13:15:02.273395 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 115/120
	I0819 13:15:03.275407 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 116/120
	I0819 13:15:04.277247 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 117/120
	I0819 13:15:05.278984 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 118/120
	I0819 13:15:06.280977 2150639 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for machine to stop 119/120
	I0819 13:15:07.282463 2150639 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 13:15:07.282536 2150639 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 13:15:07.284423 2150639 out.go:201] 
	W0819 13:15:07.286176 2150639 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 13:15:07.286199 2150639 out.go:270] * 
	* 
	W0819 13:15:07.304705 2150639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:15:07.306414 2150639 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-038572 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572: exit status 3 (18.56009218s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:25.868666 2151957 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host
	E0819 13:15:25.868691 2151957 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-038572" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627
E0819 13:14:51.089906 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627: exit status 3 (3.167638013s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:14:52.172546 2151738 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	E0819 13:14:52.172593 2151738 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-441627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-441627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.168091049s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-441627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627
E0819 13:14:58.878957 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:58.885368 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:58.896917 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:58.918465 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:58.959985 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:59.041572 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:59.203866 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:59.525650 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627: exit status 3 (3.048626885s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:01.388559 2151818 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	E0819 13:15:01.388600 2151818 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-441627" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
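
The addon enable fails for the same underlying reason as the status probes around it: the VM is unreachable over SSH (dial tcp 192.168.50.161:22: no route to host), so the paused-check that precedes the enable cannot run. A minimal reachability check before retrying the addon, assuming the profile name from the log:

	# Host state as minikube sees it.
	out/minikube-linux-amd64 status -p no-preload-441627 --alsologtostderr

	# Direct SSH probe to the guest; "no route to host" here confirms the
	# machine is gone rather than the addon itself being broken.
	out/minikube-linux-amd64 -p no-preload-441627 ssh "true"

	# Only once the host is reachable again does the enable have a chance.
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-441627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4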

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-620467 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-620467 create -f testdata/busybox.yaml: exit status 1 (47.952793ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-620467" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-620467 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 6 (225.865683ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:07.655977 2151997 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-620467" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 6 (224.761937ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:07.880209 2152027 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-620467" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
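
Both status probes above point at the actual failure: the kubeconfig has no "old-k8s-version-620467" entry, so every kubectl --context call fails before the cluster is even consulted. The status output itself suggests the fix; a short sketch of that repair:

	# Confirm the context really is missing.
	kubectl config get-contexts

	# Rewrite the context from the profile, as the warning above suggests.
	out/minikube-linux-amd64 -p old-k8s-version-620467 update-context

	# Then the deploy step can be retried.
	kubectl --context old-k8s-version-620467 create -f testdata/busybox.yaml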

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (92.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-620467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-620467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m32.075120404s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_7.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-620467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-620467 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-620467 describe deploy/metrics-server -n kube-system: exit status 1 (61.368937ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-620467" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-620467 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 6 (237.644382ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:16:40.254871 2152647 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-620467" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (92.37s)
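
Here the addon callbacks fail one layer deeper: the in-VM kubectl apply is refused on localhost:8443, i.e. the apiserver inside the old-k8s-version VM is not serving. A hedged way to confirm that, assuming the guest is still reachable over SSH:

	# List control-plane containers inside the guest; a missing or
	# crash-looping kube-apiserver explains the refused connection.
	out/minikube-linux-amd64 -p old-k8s-version-620467 ssh "sudo crictl ps -a"

	# Probe the apiserver port directly from inside the VM.
	out/minikube-linux-amd64 -p old-k8s-version-620467 ssh "curl -k https://localhost:8443/healthz"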

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-823470 --alsologtostderr -v=3
E0819 13:15:19.376056 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:21.814104 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:25.263844 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-823470 --alsologtostderr -v=3: exit status 82 (2m0.532934927s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-823470"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:15:09.367714 2152154 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:15:09.368266 2152154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:15:09.368299 2152154 out.go:358] Setting ErrFile to fd 2...
	I0819 13:15:09.368307 2152154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:15:09.368723 2152154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:15:09.369446 2152154 out.go:352] Setting JSON to false
	I0819 13:15:09.369542 2152154 mustload.go:65] Loading cluster: embed-certs-823470
	I0819 13:15:09.369887 2152154 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:15:09.369960 2152154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:15:09.370136 2152154 mustload.go:65] Loading cluster: embed-certs-823470
	I0819 13:15:09.370234 2152154 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:15:09.370278 2152154 stop.go:39] StopHost: embed-certs-823470
	I0819 13:15:09.370654 2152154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:15:09.370698 2152154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:15:09.387246 2152154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0819 13:15:09.387714 2152154 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:15:09.388341 2152154 main.go:141] libmachine: Using API Version  1
	I0819 13:15:09.388372 2152154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:15:09.388753 2152154 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:15:09.390908 2152154 out.go:177] * Stopping node "embed-certs-823470"  ...
	I0819 13:15:09.391990 2152154 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 13:15:09.392040 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:15:09.392338 2152154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 13:15:09.392363 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:15:09.395190 2152154 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:15:09.395635 2152154 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:13:53 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:15:09.395664 2152154 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:15:09.395837 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:15:09.396025 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:15:09.396278 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:15:09.396451 2152154 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:15:09.492418 2152154 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 13:15:09.551568 2152154 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 13:15:09.627327 2152154 main.go:141] libmachine: Stopping "embed-certs-823470"...
	I0819 13:15:09.627356 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:15:09.629144 2152154 main.go:141] libmachine: (embed-certs-823470) Calling .Stop
	I0819 13:15:09.632708 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 0/120
	I0819 13:15:10.634363 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 1/120
	I0819 13:15:11.635852 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 2/120
	I0819 13:15:12.637571 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 3/120
	I0819 13:15:13.638949 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 4/120
	I0819 13:15:14.641433 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 5/120
	I0819 13:15:15.643034 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 6/120
	I0819 13:15:16.644617 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 7/120
	I0819 13:15:17.646855 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 8/120
	I0819 13:15:18.648263 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 9/120
	I0819 13:15:19.649604 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 10/120
	I0819 13:15:20.651008 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 11/120
	I0819 13:15:21.652593 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 12/120
	I0819 13:15:22.654021 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 13/120
	I0819 13:15:23.655580 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 14/120
	I0819 13:15:24.657976 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 15/120
	I0819 13:15:25.659311 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 16/120
	I0819 13:15:26.661102 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 17/120
	I0819 13:15:27.662523 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 18/120
	I0819 13:15:28.664090 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 19/120
	I0819 13:15:29.665937 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 20/120
	I0819 13:15:30.667734 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 21/120
	I0819 13:15:31.669203 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 22/120
	I0819 13:15:32.670826 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 23/120
	I0819 13:15:33.672278 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 24/120
	I0819 13:15:34.674667 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 25/120
	I0819 13:15:35.676227 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 26/120
	I0819 13:15:36.677528 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 27/120
	I0819 13:15:37.679204 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 28/120
	I0819 13:15:38.681238 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 29/120
	I0819 13:15:39.683902 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 30/120
	I0819 13:15:40.685548 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 31/120
	I0819 13:15:41.687280 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 32/120
	I0819 13:15:42.688930 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 33/120
	I0819 13:15:43.690829 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 34/120
	I0819 13:15:44.693377 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 35/120
	I0819 13:15:45.695061 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 36/120
	I0819 13:15:46.696948 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 37/120
	I0819 13:15:47.698653 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 38/120
	I0819 13:15:48.700653 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 39/120
	I0819 13:15:49.703172 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 40/120
	I0819 13:15:50.704783 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 41/120
	I0819 13:15:51.706435 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 42/120
	I0819 13:15:52.707971 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 43/120
	I0819 13:15:53.709551 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 44/120
	I0819 13:15:54.712051 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 45/120
	I0819 13:15:55.713573 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 46/120
	I0819 13:15:56.715276 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 47/120
	I0819 13:15:57.716805 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 48/120
	I0819 13:15:58.718604 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 49/120
	I0819 13:15:59.721058 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 50/120
	I0819 13:16:00.722509 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 51/120
	I0819 13:16:01.724242 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 52/120
	I0819 13:16:02.725811 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 53/120
	I0819 13:16:03.727268 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 54/120
	I0819 13:16:04.729605 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 55/120
	I0819 13:16:05.731062 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 56/120
	I0819 13:16:06.732791 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 57/120
	I0819 13:16:07.734442 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 58/120
	I0819 13:16:08.735552 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 59/120
	I0819 13:16:09.738069 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 60/120
	I0819 13:16:10.739731 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 61/120
	I0819 13:16:11.741100 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 62/120
	I0819 13:16:12.742532 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 63/120
	I0819 13:16:13.744064 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 64/120
	I0819 13:16:14.746244 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 65/120
	I0819 13:16:15.747696 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 66/120
	I0819 13:16:16.749197 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 67/120
	I0819 13:16:17.750589 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 68/120
	I0819 13:16:18.751984 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 69/120
	I0819 13:16:19.753401 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 70/120
	I0819 13:16:20.754868 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 71/120
	I0819 13:16:21.756604 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 72/120
	I0819 13:16:22.758025 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 73/120
	I0819 13:16:23.759335 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 74/120
	I0819 13:16:24.761497 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 75/120
	I0819 13:16:25.763093 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 76/120
	I0819 13:16:26.764535 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 77/120
	I0819 13:16:27.766014 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 78/120
	I0819 13:16:28.767617 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 79/120
	I0819 13:16:29.769975 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 80/120
	I0819 13:16:30.771389 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 81/120
	I0819 13:16:31.772991 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 82/120
	I0819 13:16:32.774455 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 83/120
	I0819 13:16:33.776139 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 84/120
	I0819 13:16:34.778253 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 85/120
	I0819 13:16:35.779843 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 86/120
	I0819 13:16:36.781335 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 87/120
	I0819 13:16:37.782860 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 88/120
	I0819 13:16:38.784635 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 89/120
	I0819 13:16:39.786984 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 90/120
	I0819 13:16:40.788396 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 91/120
	I0819 13:16:41.790028 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 92/120
	I0819 13:16:42.791462 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 93/120
	I0819 13:16:43.793061 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 94/120
	I0819 13:16:44.795226 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 95/120
	I0819 13:16:45.797315 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 96/120
	I0819 13:16:46.798950 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 97/120
	I0819 13:16:47.800762 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 98/120
	I0819 13:16:48.802554 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 99/120
	I0819 13:16:49.804090 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 100/120
	I0819 13:16:50.805473 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 101/120
	I0819 13:16:51.806860 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 102/120
	I0819 13:16:52.808299 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 103/120
	I0819 13:16:53.810177 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 104/120
	I0819 13:16:54.812509 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 105/120
	I0819 13:16:55.813892 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 106/120
	I0819 13:16:56.815481 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 107/120
	I0819 13:16:57.817054 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 108/120
	I0819 13:16:58.818788 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 109/120
	I0819 13:16:59.820121 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 110/120
	I0819 13:17:00.821501 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 111/120
	I0819 13:17:01.822921 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 112/120
	I0819 13:17:02.824389 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 113/120
	I0819 13:17:03.825894 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 114/120
	I0819 13:17:04.828019 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 115/120
	I0819 13:17:05.829375 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 116/120
	I0819 13:17:06.830813 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 117/120
	I0819 13:17:07.832171 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 118/120
	I0819 13:17:08.833547 2152154 main.go:141] libmachine: (embed-certs-823470) Waiting for machine to stop 119/120
	I0819 13:17:09.834776 2152154 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 13:17:09.834850 2152154 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 13:17:09.836663 2152154 out.go:201] 
	W0819 13:17:09.837899 2152154 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 13:17:09.837918 2152154 out.go:270] * 
	* 
	W0819 13:17:09.855266 2152154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:17:09.856827 2152154 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-823470 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
E0819 13:17:11.504763 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:17:24.697526 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470: exit status 3 (18.634329973s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:17:28.492543 2152902 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host
	E0819 13:17:28.492562 2152902 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-823470" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
E0819 13:15:27.888126 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572: exit status 3 (3.167758377s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:29.036557 2152220 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host
	E0819 13:15:29.036582 2152220 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-038572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-038572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.168513465s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-038572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
E0819 13:15:35.739678 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572: exit status 3 (3.047191297s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:15:38.252577 2152319 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host
	E0819 13:15:38.252601 2152319 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.69:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-038572" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (699.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-620467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0819 13:16:49.781956 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:17:01.276405 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-620467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m35.862185429s)

                                                
                                                
-- stdout --
	* [old-k8s-version-620467] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-620467" primary control-plane node in "old-k8s-version-620467" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:16:45.804310 2152780 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:16:45.804426 2152780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:16:45.804435 2152780 out.go:358] Setting ErrFile to fd 2...
	I0819 13:16:45.804439 2152780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:16:45.804627 2152780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:16:45.805129 2152780 out.go:352] Setting JSON to false
	I0819 13:16:45.806116 2152780 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165556,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:16:45.806176 2152780 start.go:139] virtualization: kvm guest
	I0819 13:16:45.808268 2152780 out.go:177] * [old-k8s-version-620467] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:16:45.809665 2152780 notify.go:220] Checking for updates...
	I0819 13:16:45.809677 2152780 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:16:45.810975 2152780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:16:45.812318 2152780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:16:45.813691 2152780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:16:45.814951 2152780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:16:45.816181 2152780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:16:45.817923 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:16:45.818341 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:16:45.818382 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:16:45.834511 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0819 13:16:45.834990 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:16:45.835606 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:16:45.835635 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:16:45.836035 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:16:45.836256 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:16:45.838065 2152780 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 13:16:45.839384 2152780 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:16:45.839700 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:16:45.839756 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:16:45.855637 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0819 13:16:45.856081 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:16:45.856568 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:16:45.856589 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:16:45.856931 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:16:45.857175 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:16:45.894663 2152780 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:16:45.896003 2152780 start.go:297] selected driver: kvm2
	I0819 13:16:45.896030 2152780 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:16:45.896214 2152780 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:16:45.897174 2152780 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:16:45.897279 2152780 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:16:45.913178 2152780 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:16:45.913536 2152780 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:16:45.913570 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:16:45.913581 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:16:45.913621 2152780 start.go:340] cluster config:
	{Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:16:45.913737 2152780 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:16:45.915510 2152780 out.go:177] * Starting "old-k8s-version-620467" primary control-plane node in "old-k8s-version-620467" cluster
	I0819 13:16:45.916760 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:16:45.916804 2152780 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:16:45.916814 2152780 cache.go:56] Caching tarball of preloaded images
	I0819 13:16:45.916890 2152780 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:16:45.916899 2152780 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 13:16:45.917000 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:16:45.917170 2152780 start.go:360] acquireMachinesLock for old-k8s-version-620467: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
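	For context: the configureAuth step that just completed regenerates the machine's server certificate with the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.238, localhost, minikube, old-k8s-version-620467) and then copies the PEM files to /etc/docker on the guest. The following is only a minimal Go sketch of producing a certificate with that kind of SAN set; it self-signs for brevity, whereas the real provisioner signs with ca.pem/ca-key.pem.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-620467"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN entries mirroring the san=[...] list in the log.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-620467"},
		}
		// Self-signed for the sketch: the template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}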
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
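	The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and accept the ~85ms delta as within tolerance. A small Go sketch of that comparison, using the sample value from this log and an assumed 1s tolerance:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest; value copied from the log above.
		out := "1724073615.677156491"
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// The 1s tolerance is an assumption for the sketch, not minikube's exact threshold.
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
	}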
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
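	The commands above wire the node to CRI-O: /etc/crictl.yaml is written to point crictl at crio.sock, and two sed edits in /etc/crio/crio.conf.d/02-crio.conf pin the pause image to registry.k8s.io/pause:3.2 and the cgroup manager to cgroupfs before crio is restarted. A reduced Go sketch of the same file edits (the real run uses the sed one-liners shown in the log; this is just an equivalent illustration and needs root to run):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		// Equivalent of the crictl.yaml tee in the log.
		if err := os.WriteFile("/etc/crictl.yaml",
			[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0644); err != nil {
			panic(err)
		}

		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of the two sed -i substitutions.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0644); err != nil {
			panic(err)
		}
		// A `systemctl restart crio` would then pick up the new settings.
	}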
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
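	Both hosts-file updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent pattern: strip any existing line for the name, then append the current mapping. A rough Go equivalent of the bash one-liner above (illustrative only; needs root):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.238\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for the control-plane name.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}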
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
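	Each `openssl x509 ... -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a zero exit code lets the existing file be reused, while a non-zero one would force regeneration. A small Go sketch of the same check (the path argument is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
		raw, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same question as -checkend 86400: still valid one day from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate ok")
	}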
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
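
The sequence above probes each of the four kubeconfig-style files under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here all four are simply absent, so every grep exits with status 2 and the file is removed). A minimal Go sketch of that check-and-remove pattern follows; the endpoint string and file list are taken from the log, while running the commands through local exec (rather than minikube's SSH runner) is an assumption made purely for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig-style file that does not
// reference the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log above.
func cleanStaleConfigs() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file
		// does not exist; in both cases the file is removed.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				fmt.Printf("failed to remove %s: %v\n", f, rmErr)
			}
		}
	}
}

func main() { cleanStaleConfigs() }
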
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
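
The five kubeadm init phase invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are issued back to back once the stale configs are gone. A hedged Go sketch of that sequencing is shown below; the binary and config paths and the command shape are copied from the log, but the use of a local bash exec instead of minikube's ssh_runner is an assumption for illustration, not the project's actual API.

package main

import (
	"fmt"
	"os/exec"
)

// runKubeadmPhases re-runs the kubeadm init phases in the order the log
// shows, stopping at the first failure.
func runKubeadmPhases() error {
	const (
		binDir = "/var/lib/minikube/binaries/v1.20.0"
		config = "/var/tmp/minikube/kubeadm.yaml"
	)
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Matches the logged command shape:
		//   sudo env PATH="<binDir>:$PATH" kubeadm init phase <phase> --config <config>
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases(); err != nil {
		fmt.Println(err)
	}
}
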
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
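
Between 13:20:25 and 13:21:25 the runner probes for a kube-apiserver process roughly every 500ms via pgrep; the process never appears, so after about a minute it gives up and starts collecting diagnostics instead. A plain polling loop in Go that mirrors this behaviour might look like the sketch below; the interval, timeout, and pgrep pattern are taken from the log, everything else is assumed.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until it appears or
// the timeout elapses, mirroring the repeated pgrep calls in the log above.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Same probe the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver process never appeared within %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}
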
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
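
Once the wait times out, each retry cycle gathers the same set of diagnostics: the kubelet and CRI-O journals, dmesg, kubectl describe nodes, and container status. A small sketch of that collection loop follows; the command strings are lifted from the log lines above, while the local bash runner and the output formatting are stand-ins for illustration only.

package main

import (
	"fmt"
	"os/exec"
)

// gatherDiagnostics runs the same commands the log shows for each
// "Gathering logs for ..." step and prints their combined output.
func gatherDiagnostics() {
	steps := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// Failures are expected while the apiserver is down, e.g.
			// "The connection to the server localhost:8443 was refused".
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
		}
		fmt.Printf("== %s ==\n%s\n", s.name, out)
	}
}

func main() { gatherDiagnostics() }
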
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
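	[editor's note] Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl points at localhost:8443 and gets connection refused, which is consistent with no kube-apiserver container existing. A quick way to confirm from inside the node that nothing is serving the API on that port (assuming the default 8443 bind used by the kubeconfig referenced above) would be, as a hedged diagnostic sketch:
	  # illustrative diagnostic, not part of the test -- run inside the node
	  sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	  curl -sk https://localhost:8443/healthz || echo "API server unreachable"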
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
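	[editor's note] Each cycle gathers the same four log sources (kubelet, dmesg, CRI-O, container status), only in varying order. A self-contained sketch of that gathering step, assembled from the exact commands logged above and redirecting each to a local file (file names are placeholders, not from this run):
	  # illustrative sketch of the per-cycle log gathering performed on the node
	  sudo journalctl -u kubelet -n 400 > kubelet.log
	  sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	  sudo journalctl -u crio -n 400 > crio.log
	  sudo "$(which crictl || echo crictl)" ps -a > containers.log || sudo docker ps -a > containers.log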
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
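	Each repeated block above is one pass of minikube's wait loop for the apiserver: it looks for a running kube-apiserver process with pgrep, lists candidate containers for each control-plane component with crictl, and, finding none, gathers kubelet, dmesg, CRI-O and "describe nodes" output before retrying a few seconds later. A rough manual equivalent of one pass, run inside the minikube VM (illustrative only; the commands are copied from the log above, the loop itself is only implied by the repeated timestamps):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u kubelet -n 400 | tail -n 40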
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	* 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	* 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-620467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
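A minimal sketch of the follow-up checks that the kubeadm output above already recommends, run against this profile through minikube's ssh helper (profile name old-k8s-version-620467 and binary path are taken from the failed invocation above; the crictl pipeline is copied verbatim from the kubeadm hint and is an illustration, not an additional diagnostic captured in this report):

	out/minikube-linux-amd64 -p old-k8s-version-620467 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-620467 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-620467 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"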
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (247.933836ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-620467 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-620467 logs -n 25: (1.58726433s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
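The sequence above (crio.go:59–70 and the sed runs that follow) pins the pause image to registry.k8s.io/pause:3.10 and switches CRI-O to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf before restarting the service. A minimal Go sketch of the same two substitutions applied to a local copy of that file follows; the file path and option values come from the log, while the helper name and error handling are illustrative only.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same substitutions the log shows minikube doing
// with sed: pin the pause image and force the cgroupfs driver (sketch only).
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path taken from the log; run against a copy when experimenting.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}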
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
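The kubelet unit override shown earlier (kubeadm.go:946) and the 10-kubeadm.conf scp'd just above are generated from the node's name, IP and binaries directory. A rough Go sketch of rendering that [Service] override with text/template follows; the values are copied from the log for default-k8s-diff-port-038572, and the template helper itself is hypothetical rather than minikube's actual generator.

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn mirrors the [Service] override the log shows being written to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch only).
const kubeletDropIn = `[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Field values copied from this run of the test.
	t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.31.0",
		"NodeName": "default-k8s-diff-port-038572",
		"NodeIP":   "192.168.72.69",
	})
}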
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
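The run of openssl x509 ... -checkend 86400 commands above verifies that none of the control-plane certificates expire within the next 24 hours. A standard-library Go equivalent of that check might look like the sketch below; the certificate path is one of those from the log, and the helper name is made up for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -checkend` does in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}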
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
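Each of the four stale-config checks above follows the same pattern: grep the kubeconfig-style file for the expected control-plane endpoint and, if the endpoint (or the file itself) is missing, remove the file so kubeadm can regenerate it in the following init phases. A hedged sketch of that loop (assumed helper layout, shelling out the same way the log does):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Sketch only: paths and endpoint are taken from the log above; error
    // handling is minimal and sudo is assumed to be passwordless.
    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                // endpoint not found (or file missing): drop the stale config
                fmt.Printf("removing stale config %s\n", f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }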
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
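The `date +%s.%N` round trip above is how the guest clock is compared against the host: the guest prints seconds.nanoseconds since the epoch, and the difference from the host clock at that moment (64.68ms here) is checked against a skew tolerance. A small parsing sketch (hypothetical helper, not fix.go itself; the tolerance value is an assumption):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns output like "1724073634.971972974" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // pad or truncate the fractional part to nanosecond precision
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1724073634.971972974")
        delta := time.Since(guest)
        // A skew check would compare delta against a tolerance (assumed ~1s here).
        fmt.Printf("guest clock delta: %v\n", delta)
    }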
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
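The status-255 sysctl failure above is the expected probe result when the br_netfilter module is not yet loaded: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the module is in the kernel, so the next steps fall back to `modprobe br_netfilter` and then enable IPv4 forwarding. A minimal sketch of that probe-then-load fallback (assumed wrapper; commands as in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe: fails until br_netfilter is loaded, because the sysctl key
        // /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Fallback: load the module so the bridge netfilter keys appear.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe br_netfilter failed:", err)
                return
            }
        }
        // Enable IPv4 forwarding, as the following log line does.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }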
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
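
The kubeadm config above is rendered from per-node values (cluster name, node IP, API server port, CRI socket). A minimal sketch, assuming nothing about minikube's real bootstrapper template, of rendering just the InitConfiguration stanza with text/template; the values are those from the log.

// Minimal sketch of rendering the InitConfiguration stanza above from node
// values via text/template. The template literal is illustrative, not
// minikube's actual bootstrapper template.
package main

import (
	"os"
	"text/template"
)

type node struct {
	Name, IP  string
	Port      int
	CRISocket string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.IP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, node{
		Name:      "embed-certs-823470",
		IP:        "192.168.61.50",
		Port:      8443,
		CRISocket: "unix:///var/run/crio/crio.sock",
	})
}
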
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
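
The bash one-liner above updates /etc/hosts idempotently: it strips any existing control-plane.minikube.internal entry, appends the current mapping, and copies the result back with sudo. A stdlib sketch of the same filter-and-append step (the "sudo cp" write-back is omitted; IP and hostname come from the log).

// Sketch of the idempotent /etc/hosts update performed by the bash one-liner
// above: drop any stale control-plane.minikube.internal line, then append the
// current mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func patchHosts(contents, ip, host string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for this hostname
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+host)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(patchHosts(string(data), "192.168.61.50", "control-plane.minikube.internal"))
}
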
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
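
The openssl "-checkend 86400" calls above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent sketch with crypto/x509, using one of the certificate paths from the log (this check would run inside the VM):

// Sketch of the "-checkend 86400" expiry check above using crypto/x509:
// a certificate fails the check if it expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}
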
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
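
The grep/rm sequence above treats a kubeconfig as stale when it does not reference https://control-plane.minikube.internal:8443 and deletes it so that the later "kubeadm init phase kubeconfig" can regenerate it. A hedged sketch of that check (file list and endpoint taken from the log; the real removal happens with sudo over SSH):

// Sketch of the stale-kubeconfig cleanup above: any conf that does not
// reference the expected control-plane endpoint is removed so that
// "kubeadm init phase kubeconfig" can recreate it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it (the log does this with "sudo rm -f").
			if rmErr := os.Remove(conf); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove:", rmErr)
			}
			continue
		}
		fmt.Println(conf, "already points at", endpoint)
	}
}
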
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
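
The healthz wait above polls https://192.168.61.50:8443/healthz roughly every 500ms, tolerating connection-refused, 403 (anonymous access before RBAC bootstrap), and 500 (post-start hooks still failing) responses until a 200 arrives. A minimal sketch of such a polling loop; minikube verifies against the cluster CA, whereas this sketch skips TLS verification for brevity.

// Sketch of the healthz polling loop above: retry until /healthz returns 200
// or the deadline passes. 403s and 500s during startup count as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// minikube verifies the cluster CA; skipping verification keeps the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.50:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
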
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
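
The 496-byte /etc/cni/net.d/1-k8s.conflist written above carries the bridge CNI configuration referred to by the "Configuring bridge CNI" step. A generic sketch that emits a minimal bridge conflist with encoding/json; the concrete values are illustrative assumptions, not the exact file minikube generates.

// Generic sketch of a bridge CNI conflist like the 1-k8s.conflist written above.
// Field values are illustrative; only the pod CIDR is taken from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
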
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
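	The lines above show minikube transferring its pre-cached image tarballs into /var/lib/minikube/images and loading each one into the CRI-O image store with "sudo podman load -i <tarball>". The following is only an illustrative Go sketch of that step, not minikube's own code: it runs the command locally rather than through minikube's ssh_runner, and the helper name and tarball list are assumptions.

	// Sketch: load pre-cached image tarballs the same way the log above does,
	// by shelling out to "sudo podman load -i <tarball>" for each file.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// loadImageTarball is a hypothetical helper; minikube's real logic lives in crio.go / cache_images.go.
	func loadImageTarball(path string) error {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load -i %s: %v\n%s", path, err, out)
		}
		fmt.Printf("loaded %s in %s\n", path, time.Since(start))
		return nil
	}

	func main() {
		// Paths copied from the log; in minikube they are first scp'd to the node.
		tarballs := []string{
			"/var/lib/minikube/images/coredns_v1.11.1",
			"/var/lib/minikube/images/kube-proxy_v1.31.0",
		}
		for _, t := range tarballs {
			if err := loadImageTarball(t); err != nil {
				fmt.Println(err)
			}
		}
	}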
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
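	The openssl invocations above verify that each existing control-plane certificate will remain valid for at least the next 24 hours (86400 seconds) before the cluster is restarted. Below is a small illustrative Go sketch of the same check, not minikube's code; the certificate paths are copied from the log and local execution is an assumption.

	// Sketch: mirror "openssl x509 -noout -in <cert> -checkend 86400" for a few certs.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// validFor24h returns true when openssl exits 0, i.e. the cert does NOT
	// expire within the next 86400 seconds.
	func validFor24h(cert string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run()
		return err == nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			fmt.Printf("%s valid for 24h: %v\n", c, validFor24h(c))
		}
	}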
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
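	After restarting the control plane, the log above polls https://192.168.50.161:8443/healthz until the apiserver returns 200 "ok"; the intermediate 403 and 500 responses typically mean the RBAC bootstrap roles and other post-start hooks have not finished yet. A rough illustrative sketch of such a polling loop follows; it is not minikube's implementation, and skipping TLS verification is an assumption made only because the sketch does not load the cluster CA.

	// Sketch: poll the apiserver /healthz endpoint until it reports "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// InsecureSkipVerify because this sketch does not load the cluster CA cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
				// 403/500 here usually means post-start hooks are still completing.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint copied from the log above.
		if err := waitForHealthz("https://192.168.50.161:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}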
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
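The block above is one full iteration of the log-collection loop: minikube probes for a kube-apiserver process, asks CRI-O for each control-plane container by name (every query returns 0 containers), then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. A minimal sketch of the same checks (not part of the captured log), using only commands that appear verbatim above and can be run on the node, e.g. via minikube ssh:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"      # empty output == "0 containers" above
    done
    sudo journalctl -u kubelet -n 400                                   # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig      # fails while the apiserver is down
    sudo journalctl -u crio -n 400                                      # CRI-O logs
    sudo crictl ps -a || sudo docker ps -a                              # container status
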
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
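Interleaved with that loop, three other runs (PIDs 2151869, 2152365, 2153523) are polling metrics-server pods for the Ready condition; at 13:24:22 PID 2152365 gives up after the 4m0s deadline (pod_ready.go:82) and switches to collecting per-container logs with crictl. A minimal sketch of the equivalent one-off readiness check (not from the log; the kube context name below is a placeholder, since the profile name is not shown on these lines):

    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-hmghm \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "False" while the pod is unready, matching the pod_ready.go:103 lines above.
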
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
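	The two preflight warnings above point at their own remedies; a minimal sketch run on the node (the config path is the one minikube uses later in this log, shown here only for illustration):

	  # rewrite the deprecated kubeadm.k8s.io/v1beta3 config with the newer API version
	  sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm.new.yaml
	  # enable the kubelet unit so the Service-Kubelet warning does not recur
	  sudo systemctl enable kubelet.service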
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
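	The 496-byte conflist itself is not reproduced in the log; a bridge configuration of roughly that shape, written here purely as an assumed example, would be:

	  # assumed example of a minimal bridge conflist; not the literal file minikube wrote in this run
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	       "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	      {"type": "portmap", "capabilities": {"portMappings": true}}
	    ]
	  }
	  EOF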
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
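	Once those two commands return, their effect can be checked from any kubectl pointed at the same cluster; a hypothetical verification, not something this test run performs:

	  # the primary-node label applied above
	  kubectl --context embed-certs-823470 get node embed-certs-823470 --show-labels
	  # the cluster-admin binding for kube-system:default created above
	  kubectl --context embed-certs-823470 get clusterrolebinding minikube-rbac -o wide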
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
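	A quick way to confirm what that apply produced, sketched here as a manual follow-up rather than a step the test takes:

	  # the metrics-server Deployment and its APIService registration
	  kubectl --context embed-certs-823470 -n kube-system get deploy metrics-server
	  kubectl --context embed-certs-823470 get apiservice v1beta1.metrics.k8s.io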
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
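	The probe that keeps failing above is exactly the call kubeadm logs; repeating it by hand on the node narrows down whether the kubelet ever comes up (the systemctl/journalctl lines are a suggested follow-up, not output from this run):

	  # the same healthz call kubeadm performs; prints "ok" once the kubelet is serving
	  curl -sSL http://localhost:10248/healthz
	  # if it stays connection-refused, inspect the kubelet unit itself
	  sudo systemctl status kubelet
	  sudo journalctl -u kubelet --no-pager | tail -n 50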
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
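	The same endpoints can be hit from outside the VM; a sketch against the address logged above (-k skips certificate verification and is only acceptable for a quick smoke check):

	  curl -k https://192.168.61.50:8443/healthz   # expect: ok
	  curl -k https://192.168.61.50:8443/version   # reports v1.31.0 for this cluster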
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
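	At this point the embed-certs-823470 profile is fully started; the state the harness just verified can be reproduced with two plain kubectl calls:

	  kubectl --context embed-certs-823470 get nodes -o wide
	  kubectl --context embed-certs-823470 -n kube-system get pods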
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
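	The four grep/rm pairs above all apply one rule: keep a kubeconfig under /etc/kubernetes only if it still references the expected control-plane endpoint. Condensed into a single loop for clarity:

	  # drop any kubeconfig that does not point at control-plane.minikube.internal:8443
	  # (here none of the files exist after the reset, so each rm -f is a no-op)
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done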
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
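With the addons applied, the metrics-server pod is still reported Pending further down in this run. Checking it by hand would look roughly like this (context name taken from the profile above; these commands are a manual verification suggestion, not part of the test):

	kubectl --context no-preload-441627 -n kube-system get deploy metrics-server
	kubectl --context no-preload-441627 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-441627 top nodes   # fails until the APIService reports Available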
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
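The healthz probe above is a plain HTTPS GET against the apiserver endpoint. Reproducing it by hand from anywhere that can reach the node IP would look roughly like this (-k skips TLS verification, which is the simplest option outside the VM):

	curl -k https://192.168.50.161:8443/healthz
	# expected body on success, matching the log:
	ok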
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
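The four grep/rm pairs above implement a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (here the files are simply missing) it is removed so the kubeadm retry can regenerate it. A compact sketch of the same logic:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: let kubeadm rewrite it
	  fi
	done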
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.473966116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074103473935744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=421c37e8-a0b6-42dd-8f23-09a262d6e6a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.474625165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce850906-fa78-41ff-a5f7-3ed09684ac4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.474696566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce850906-fa78-41ff-a5f7-3ed09684ac4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.474731745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce850906-fa78-41ff-a5f7-3ed09684ac4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.506457745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3aca386b-5dca-4c62-9dc5-8b010624da90 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.506579157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3aca386b-5dca-4c62-9dc5-8b010624da90 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.507878839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=445550af-d794-4893-8d7f-43d6c76d8833 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.508395907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074103508365316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=445550af-d794-4893-8d7f-43d6c76d8833 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.508996100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05cfa258-2596-48fd-89af-1e13a6cd114c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.509053958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05cfa258-2596-48fd-89af-1e13a6cd114c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.509128153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=05cfa258-2596-48fd-89af-1e13a6cd114c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.545729056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efffbbc7-5a13-4387-9761-e799ae25fa97 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.545821880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efffbbc7-5a13-4387-9761-e799ae25fa97 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.546781353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18817ce4-f4b2-407b-a66d-a64a28fe06a4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.547223011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074103547195098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18817ce4-f4b2-407b-a66d-a64a28fe06a4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.547724463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2f022a3-a939-4e5c-8e78-cb886276cd4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.548512746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2f022a3-a939-4e5c-8e78-cb886276cd4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.548685792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e2f022a3-a939-4e5c-8e78-cb886276cd4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.588956117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4136d6c0-1167-40b0-a5cf-4de74e8d0f93 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.589046672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4136d6c0-1167-40b0-a5cf-4de74e8d0f93 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.590395979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64721573-6522-4ea1-b6fa-99bf217801e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.590757352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074103590735096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64721573-6522-4ea1-b6fa-99bf217801e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.591387000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e189b525-f458-4cd3-adb8-87064cb59761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.591456148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e189b525-f458-4cd3-adb8-87064cb59761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:28:23 old-k8s-version-620467 crio[650]: time="2024-08-19 13:28:23.591507633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e189b525-f458-4cd3-adb8-87064cb59761 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052662] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041462] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.891894] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.505316] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613375] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.587338] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.059255] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064534] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.190720] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.158664] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.305924] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.524185] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.059446] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.840054] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	[ +13.606049] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 13:24] systemd-fstab-generator[5082]: Ignoring "noauto" option for root device
	[Aug19 13:26] systemd-fstab-generator[5362]: Ignoring "noauto" option for root device
	[  +0.067230] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:28:23 up 8 min,  0 users,  load average: 0.01, 0.08, 0.06
	Linux old-k8s-version-620467 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000b584e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000a33830, 0x24, 0x0, ...)
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: net.(*Dialer).DialContext(0xc000170d20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a33830, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000936020, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a33830, 0x24, 0x60, 0x7f54a0e44738, 0x118, ...)
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: net/http.(*Transport).dial(0xc0006b2000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a33830, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: net/http.(*Transport).dialConn(0xc0006b2000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00056e540, 0x5, 0xc000a33830, 0x24, 0x0, 0xc0006dbd40, ...)
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: net/http.(*Transport).dialConnFor(0xc0006b2000, 0xc0009eb6b0)
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]: created by net/http.(*Transport).queueForDial
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5540]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 19 13:28:21 old-k8s-version-620467 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 13:28:21 old-k8s-version-620467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 13:28:21 old-k8s-version-620467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 19 13:28:21 old-k8s-version-620467 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 13:28:21 old-k8s-version-620467 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5604]: I0819 13:28:21.895147    5604 server.go:416] Version: v1.20.0
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5604]: I0819 13:28:21.895486    5604 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5604]: I0819 13:28:21.901896    5604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5604]: W0819 13:28:21.906639    5604 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 19 13:28:21 old-k8s-version-620467 kubelet[5604]: I0819 13:28:21.907694    5604 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (233.501942ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-620467" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (699.43s)
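
The failure above comes down to the kubelet never becoming healthy under Kubernetes v1.20.0, so kubeadm's wait-control-plane phase times out; the kubelet log even reports "Cannot detect current cgroup on cgroup v2". A minimal triage sketch, reusing the profile name and start flags recorded in this report plus the suggestion printed in the log (whether the cgroup-driver override actually resolves the crash loop on this image is not verified here):

	# Inspect the kubelet crash loop on the node, as the log suggests
	minikube -p old-k8s-version-620467 ssh "sudo journalctl -xeu kubelet"
	# Retry the start with the override suggested in the log
	minikube start -p old-k8s-version-620467 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd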

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
E0819 13:17:30.744613 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470: exit status 3 (3.167597571s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 13:17:31.660550 2153000 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host
	E0819 13:17:31.660572 2153000 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-823470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-823470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.166459339s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-823470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470: exit status 3 (3.050728975s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 13:17:40.876511 2153066 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host
	E0819 13:17:40.876530 2153066 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.50:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-823470" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)
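
Both errors in this test reduce to the same symptom: SSH to the embed-certs-823470 VM at 192.168.61.50:22 returns "no route to host", so the status probe and the crictl pause check behind `addons enable` both fail. A manual re-check once the VM is reachable again would simply repeat the commands the test ran, copied from the log above:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-823470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4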

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 13:24:40.836872 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:24:58.879244 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:25:39.336570 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 13:33:35.361451356 +0000 UTC m=+5815.996241632
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
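For a quick manual check of what the test was waiting for, before reading the post-mortem dump below, the dashboard pods can be listed directly. This is a sketch that assumes the kubeconfig context carries the profile name, as in the other kubectl invocations in this report:

	kubectl --context default-k8s-diff-port-038572 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-038572 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard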
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-038572 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-038572 logs -n 25: (2.039509736s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
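
The run above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH with a series of sed commands: pin the pause image to registry.k8s.io/pause:3.10, force cgroup_manager to "cgroupfs", re-add conmon_cgroup = "pod" after it, and open net.ipv4.ip_unprivileged_port_start. The Go sketch below performs the same kind of in-place edits for the first three substitutions; it is illustrative only (the path, keys, and values are taken from the log lines above, not from minikube's source).

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

	data, err := os.ReadFile(confPath)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager,
	// mirroring the delete-then-append pair of sed invocations in the log.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(confPath, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", confPath)
}
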
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
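
The one-liner above keeps /etc/hosts idempotent: strip any existing host.minikube.internal record, append the current gateway IP, and copy the result back into place. A minimal Go equivalent, assuming the same tab-separated record format, might look like this (illustrative only):

package main

import (
	"os"
	"strings"
)

// Rewrites /etc/hosts so it carries exactly one record for host.minikube.internal,
// mirroring the grep -v / echo / cp one-liner in the log. Illustrative only.
func main() {
	const (
		hostsPath = "/etc/hosts"
		name      = "host.minikube.internal"
		ip        = "192.168.72.1" // gateway IP taken from the log
	)

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of grep -v $'\thost.minikube.internal$': drop any existing record.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)

	// The log writes to /tmp/h.$$ first and then copies over /etc/hosts with sudo;
	// this sketch simply writes the file back in place.
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
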
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
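
The two "sudo crictl images --output json" runs above bracket the preload step: the first finds no registry.k8s.io/kube-apiserver:v1.31.0 image and triggers the tarball copy and extraction, the second confirms all images are present. A rough sketch of such a check is below; the JSON field names (images, repoTags) are assumptions based on crictl's CRI-style output and are not verified against this particular crictl build.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList approximates the shape of `crictl images --output json`;
// the field names here are assumptions based on CRI conventions.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0") // image name taken from the log
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded image present:", ok)
}
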
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
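
Each "openssl x509 -noout -checkend 86400" call above asserts that a control-plane certificate is still valid 24 hours from now. The same check can be expressed with Go's crypto/x509, as in this sketch (the certificate paths are copied from the log; everything else is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend mirrors `openssl x509 -noout -in <cert> -checkend <seconds>`:
// it succeeds only if the certificate is still valid d from now.
func checkend(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s (within %s)", path, cert.NotAfter, d)
	}
	return nil
}

func main() {
	// Paths taken from the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		if err := checkend(p, 24*time.Hour); err != nil {
			fmt.Println("WILL EXPIRE:", err)
			continue
		}
		fmt.Println("ok:", p)
	}
}
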
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
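
The healthz wait above cycles through 403 (anonymous user forbidden), 500 (post-start hooks such as rbac/bootstrap-roles still failing), and finally 200. A minimal poller that reproduces that loop, assuming the same endpoint and accepting the apiserver's self-signed certificate, could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Polls the apiserver /healthz endpoint until it returns 200, the way the
// log above retries through 403 and 500 responses. Certificate verification
// is disabled because this is an anonymous liveness probe only.
func main() {
	const url = "https://192.168.72.69:8444/healthz" // address and port taken from the log

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
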
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
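For orientation, the pod_ready.go lines above are a per-pod readiness gate with a 4m0s budget that is short-circuited whenever the hosting node itself reports "Ready":"False" (the "(skipping!)" messages). A rough client-go sketch of that check follows; the pod, node and kubeconfig names are taken from the log, while waitPodReady and the 2s poll interval are illustrative assumptions, not minikube's actual code.

// Sketch only: approximates the "waiting up to 4m0s for pod ... to be Ready"
// loop logged above; not the minikube implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod reports Ready, bailing out early (like the
// "(skipping!)" lines above) while the hosting node is not Ready.
func waitPodReady(cs *kubernetes.Clientset, ns, pod, node string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, fmt.Errorf("node %q hosting pod %q is not Ready (skipping)", node, pod)
			}
		}
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			return false, nil // pod may not exist yet; keep polling
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19479-2086935/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-038572",
		"default-k8s-diff-port-038572", 4*time.Minute)
	fmt.Println("etcd ready wait:", err)
}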
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
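The "scp memory --> ..." lines above stream an in-memory addon manifest to the guest over the SSH client opened here (same IP, port, user and key as in the sshutil.go:53 line). Below is a minimal sketch of that pattern with golang.org/x/crypto/ssh; the target path and credentials come from the log, while the sudo tee trick and the rest of the program are illustrative assumptions rather than minikube's actual transfer code.

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.72.69:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Placeholder bytes; the real run ships the 2676-byte storage-provisioner manifest.
	manifest := []byte("# storage-provisioner.yaml contents go here\n")

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(manifest)
	// Stream the bytes straight into the target path; no temp file on the host.
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
		panic(err)
	}
	fmt.Println("copied", len(manifest), "bytes")
}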
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
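The provision.go:117 line above mints a fresh server certificate for the machine, signed by the shared minikube CA and carrying the listed SANs. As a self-contained illustration of what such a SAN-bearing server certificate contains, here is a hedged Go sketch using crypto/x509; the SAN list and org name are copied from the log, while the self-signing, the ECDSA key and the validity period are assumptions made for brevity (the real flow signs with ca.pem/ca-key.pem).

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-620467"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-620467"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
	}
	// Self-signed here for brevity; the logged flow signs with the shared CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}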
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
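The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host-side timestamp, and only resynchronize when the delta exceeds a tolerance. A tiny sketch of that comparison using the exact values from this log; the one-second tolerance here is an assumption for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values as logged by fix.go:229 above.
	guest := time.Date(2024, 8, 19, 13, 20, 15, 677156491, time.UTC)
	remote := time.Date(2024, 8, 19, 13, 20, 15, 592158206, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold, not minikube's real value
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v, skipping resync\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds %v, would resync the guest clock\n", delta, tolerance)
	}
}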
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
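	(Annotation: the tee and the three sed edits above leave the guest's CRI tooling pointed at CRI-O with the cgroupfs driver. A minimal sketch of the resulting files — only the keys touched here; surrounding defaults in 02-crio.conf may differ:)
	    # /etc/crictl.yaml (written via tee above)
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	    # /etc/crio/crio.conf.d/02-crio.conf (keys edited by the sed commands above)
	    pause_image = "registry.k8s.io/pause:3.2"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"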
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
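	(Annotation: the /etc/hosts one-liner above strips any stale host.minikube.internal entry and re-appends the current gateway mapping via a temp file, so afterwards the guest resolves the host machine through a line like:)
	    192.168.39.1	host.minikube.internal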
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
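	(Annotation: the preload tarball is decompressed with lz4 (-I lz4) and unpacked under /var so the cached images land directly in CRI-O's storage; --xattrs --xattrs-include security.capability preserves file capabilities on the extracted binaries. A rough manual equivalent of this step, assuming the tarball has already been copied to the guest as /preloaded.tar.lz4:)
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4   # minikube removes the tarball once extraction succeeds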
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
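	(Annotation: the unit override printed above is written below as the 430-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. A quick, hedged way to confirm what the node actually runs would be:)
	    systemctl cat kubelet                      # base unit plus the 10-kubeadm.conf drop-in
	    sudo systemctl daemon-reload && sudo systemctl start kubelet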
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}

	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
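	(Annotation: the generated kubeadm config above is what gets copied below to /var/tmp/minikube/kubeadm.yaml.new (2123 bytes) and then fed to the individual kubeadm phases during the restart; run with the pinned v1.20.0 binaries on PATH, the sequence visible further down is roughly:)
	    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml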
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
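	(Annotation: the pattern above is the standard OpenSSL trust-store layout: each CA is copied into /usr/share/ca-certificates and symlinked under /etc/ssl/certs by its subject hash. A minimal sketch of how one of those links is derived:)
	    # the hash printed by openssl (b5213941 for minikubeCA.pem) becomes the symlink name
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"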
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
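	(Annotation: the -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing profile certs can be reused, e.g.:)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid for at least a day"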
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
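	(Annotation: the four grep/rm pairs above are all the same check: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 — or does not exist — it is removed so kubeadm can regenerate it. A compact sketch of that cleanup:)
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done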
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
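	(Annotation: the repeated "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines that follow, interleaved with the embed-certs output, are minikube polling at roughly half-second intervals for the kube-apiserver process to appear after the kubeadm phases. The equivalent shell loop, with a hypothetical 60-second cap, would look something like:)
	    for i in $(seq 1 120); do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # stop once the apiserver PID shows up
	      sleep 0.5
	    done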
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
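The retry.go lines above poll the KVM network's DHCP leases with a growing delay until the new domain reports an IP address. A minimal Go sketch of that retry shape, with a hypothetical lookupIP helper standing in for the libvirt lease query (the helper and its timings are illustrative, not minikube's actual code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a stand-in for querying the hypervisor's DHCP leases;
    // it returns an error until the machine has been assigned an address.
    func lookupIP(name string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a growing delay, mirroring the
    // "will retry after ..." messages in the log above.
    func waitForIP(name string, attempts int, base time.Duration) (string, error) {
        delay := base
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(name); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the wait between attempts
        }
        return "", fmt.Errorf("machine %s never reported an IP", name)
    }

    func main() {
        if ip, err := waitForIP("no-preload-441627", 5, 300*time.Millisecond); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("machine IP:", ip)
        }
    }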
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
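The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a wait loop for the kube-apiserver process. A small Go sketch of the same loop, polling pgrep on a fixed interval until it matches or a deadline passes; the command and pattern are copied from the log, the helper itself is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until kube-apiserver shows up or the
    // timeout expires. pgrep exits non-zero when no process matches.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := cmd.Run(); err == nil {
                return nil // a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("kube-apiserver process is up")
        }
    }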
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
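The `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate stays valid for at least 24 hours. A short Go sketch of that check, shelling out to openssl the same way and treating a zero exit status as "still valid"; the paths are taken from the log, the wrapper is illustrative and assumes openssl is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // checkCertValid runs `openssl x509 -checkend` as in the log above:
    // exit status 0 means the certificate is still valid for at least
    // the given number of seconds.
    func checkCertValid(path string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() == nil
    }

    func main() {
        // 86400 seconds = 24 hours, matching the checks in the log.
        for _, crt := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            fmt.Printf("%s valid for 24h: %v\n", crt, checkCertValid(crt, 86400))
        }
    }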
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
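restartPrimaryControlPlane above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full kubeadm init. A sketch of driving those phases from Go, mirroring the `sudo env PATH=... kubeadm init phase ...` invocations in the log; the wrapper is illustrative and assumes the kubeadm binary is already unpacked under /var/lib/minikube/binaries/v1.31.0:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runPhase invokes one `kubeadm init phase ...` subcommand with the same
    // kubeadm config and PATH prefix used in the log above.
    func runPhase(args ...string) error {
        cmdArgs := append([]string{
            "env", "PATH=/var/lib/minikube/binaries/v1.31.0:" + os.Getenv("PATH"),
            "kubeadm", "init", "phase",
        }, args...)
        cmdArgs = append(cmdArgs, "--config", "/var/tmp/minikube/kubeadm.yaml")
        cmd := exec.Command("sudo", cmdArgs...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // The same phase order as the ssh_runner calls above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            if err := runPhase(p...); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }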
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
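The healthz polling above starts with 403 (anonymous user) and 500 (post-start hooks still running) responses before settling on 200. A simplified Go sketch of that poll loop against https://192.168.61.50:8443/healthz; TLS verification is skipped only to keep the sketch self-contained, whereas minikube authenticates with the cluster's client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Println("healthz not ready yet, status:", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.50:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }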
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
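The pod_ready.go waits above check each system pod's Ready condition and skip pods whose node is not yet Ready. A compact sketch of the Ready-condition test using the k8s.io/api types; listing pods through client-go is omitted to keep it short, and the sample pod constructed in main is fabricated for illustration:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // what the pod_ready.go waits above are polling for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }
        fmt.Println("ready:", isPodReady(pod)) // prints: ready: false
    }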
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
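The CRI-O preparation recorded above reduces to a handful of guest-side shell edits. A minimal consolidated sketch follows, assembled only from the commands already shown in this log (standalone form assumed; paths and the pause-image tag are taken verbatim from the entries above):

	# point CRI-O at the expected pause image and cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# tell crictl where the CRI-O socket lives
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# kernel prerequisites for bridged pod networking
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	# apply and verify (the log reports RuntimeName cri-o, RuntimeVersion 1.29.1)
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version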
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
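The half-second pgrep cadence above is the restart path waiting for a kube-apiserver process to appear on the node. Roughly the same check can be run by hand; a sketch (not the harness's exact loop) using the command from the log:

	# poll until a kube-apiserver started by minikube is visible
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done
	echo "kube-apiserver process is up"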
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
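pod_ready.go is polling the pod's Ready condition here. A hedged manual equivalent with kubectl (context and pod name taken from the log; the 4m budget mirrors the wait above):

	# read the Ready condition directly
	kubectl --context embed-certs-823470 -n kube-system get pod metrics-server-6867b74b74-8dwpf \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until it flips to Ready
	kubectl --context embed-certs-823470 -n kube-system wait pod/metrics-server-6867b74b74-8dwpf \
	  --for=condition=Ready --timeout=4m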
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
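Because no preload tarball exists for this Kubernetes/runtime combination, each image is shipped from the host cache and handed to podman one at a time. The guest-side step the log records for each image is essentially the following sketch (coredns used as the example; the final grep is an assumed verification step, not something the harness runs):

	# copy from the host cache is skipped when the file already exists on the guest
	stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	# import the tarball into the CRI-O image store
	sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	# confirm the runtime now sees the image
	sudo crictl images | grep coredns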
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
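For context, the image-loading sequence above amounts to copying the cached tarballs onto the guest and loading each one with podman, which on this guest backs the image store that CRI-O reads. A minimal shell sketch of the same loop (paths taken from the log; minikube itself drives this through ssh_runner rather than a script):

    # Load every cached image tarball into the container storage used by CRI-O.
    for tarball in /var/lib/minikube/images/*; do
        sudo podman load -i "$tarball"
    done
    sudo crictl images    # confirm the runtime now sees the loaded images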
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
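The three hash-and-link steps above follow the usual OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A sketch for one of them (file names as in the log):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # prints e.g. b5213941
    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"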
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
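Each -checkend probe above simply asks openssl whether the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration. An equivalent stand-alone check, for illustration:

    for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
        if ! sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400; then
            echo "${crt}.crt expires within 24h" >&2
        fi
    done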
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
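The grep/rm pairs above encode one rule: a kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm can regenerate it. A compact sketch of that rule (here the files simply do not exist yet, so the removals are no-ops):

    endpoint="https://control-plane.minikube.internal:8443"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        f="/etc/kubernetes/$conf"
        sudo grep -q "$endpoint" "$f" 2>/dev/null || sudo rm -f "$f"
    done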
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
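Instead of a full kubeadm init, the restart path replays individual init phases against the generated config, in the order logged above. The same sequence, runnable by hand with the flags from the log:

    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.31.0
    sudo env PATH="$bin:$PATH" kubeadm init phase certs all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubeconfig all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubelet-start --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase control-plane all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase etcd local --config "$cfg"

The addon phase is run later, once the API server reports healthy (see 13:21:20.661435 below).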
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
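The repeated pgrep lines are a simple poll: roughly every 500ms the runner checks for a kube-apiserver process whose command line mentions minikube, and the wait ends as soon as a PID turns up. A bare-bones version of that loop:

    # Wait until the kube-apiserver process exists (same flags as in the log).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done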
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
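Process existence is followed by a /healthz wait: the early 403 typically appears while the RBAC bootstrap roles that allow anonymous /healthz access are still being created, the 500 responses enumerate the post-start hooks that have not finished, and the wait ends on a 200 "ok". A curl-based stand-in for that probe (minikube uses an in-process HTTP client, so this is only an approximation):

    url=https://192.168.50.161:8443/healthz
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$url")" = "200" ]; do
        sleep 0.5
    done
    curl -sk "$url"    # prints "ok" once healthy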
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
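The pod_ready lines after the restart are a wait-for-Ready loop over the system-critical pods listed at 13:21:20.955763; while the node itself still reports Ready:False, several per-pod waits are skipped with the node-not-Ready error shown. A rough kubectl equivalent of the overall check (minikube does this through client-go, not kubectl):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=4m0s
    done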
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
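With no control-plane containers found, the runner falls back to gathering diagnostics; the five "Gathering logs for ..." steps map directly onto these commands (copied from the log, so the only assumption is running them by hand on the node):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a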
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
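
The pod_ready lines interleaved here come from other test clusters running in parallel (note the different process IDs), each polling whether its metrics-server pod has reached the Ready condition. A rough equivalent of that check with plain kubectl, using the pod name and namespace shown in the log (the jsonpath expression is an illustration, not the harness's own code):

  # Poll the Ready condition of the metrics-server pod seen in the log above.
  kubectl --namespace kube-system get pod metrics-server-6867b74b74-8dwpf \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # Prints "True" once the pod is Ready; the log shows it remaining "False".
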
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
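
Every "describe nodes" attempt above fails for the same reason: nothing is answering on localhost:8443 inside the VM. Two generic checks that could confirm this from the node (standard tools, not part of minikube's tooling; shown only as a debugging aid):

  # Is anything listening on the API server port?
  sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"

  # Does the API server answer its health endpoint? (-k because the apiserver uses a self-signed CA)
  curl -sk https://localhost:8443/healthz || echo "healthz unreachable"
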
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
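	(Editor's note) Interleaved with the apiserver polling are pod_ready checks from three other profiles (processes 2151869, 2152365 and 2153523), each repeatedly reporting a metrics-server pod with Ready=False roughly every 2-3 seconds. The sketch below shows an equivalent readiness check by shelling out to kubectl with a jsonpath query, in the same spirit as the harness; the context name is a placeholder, not one of the profiles in this report.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the named pod's Ready condition is "True".
	// Context and namespace here are illustrative placeholders.
	func podReady(kubeContext, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for i := 0; i < 10; i++ {
			ready, err := podReady("my-profile", "kube-system",
				"metrics-server-6867b74b74-hd777")
			fmt.Println(ready, err)
			if ready {
				return
			}
			time.Sleep(2 * time.Second) // matches the cadence of the checks above
		}
	}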
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
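	Each "Gathering logs for <component>" pair above follows the same pattern: resolve the component's container ID with crictl, then tail its last 400 log lines. A rough manual equivalent on the node, assuming crictl is available as shown in the log:
	
	# resolve the kube-apiserver container ID, then tail its logs (mirrors the lines above)
	id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	[ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"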
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
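	At this point the default-k8s-diff-port-038572 profile reports a healthy apiserver (the 200 from /healthz above) and running kube-system pods except metrics-server. A quick manual re-check against the kubeconfig minikube just wrote, sketched here only as an illustration:
	
	kubectl --context default-k8s-diff-port-038572 get --raw='/healthz'       # expect: ok
	kubectl --context default-k8s-diff-port-038572 -n kube-system get pods    # metrics-server still Pending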
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
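	For the v1.20.0 run (pid 2152780) the kubelet never becomes healthy: the probe kubeadm quotes above can be reproduced on the node with
	
	curl -sSL http://localhost:10248/healthz   # here: connection refused on 127.0.0.1:10248
	
	and it keeps failing in the repeated kubelet-check messages that follow below.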
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
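For context, the addon sequence just logged (storage-provisioner, default-storageclass, metrics-server) can also be inspected by hand with the minikube CLI and kubectl once the profile's kubeconfig is written; a minimal sketch, assuming the binary path used elsewhere in this report and the embed-certs-823470 profile/context names from this log:

  # show addon state for this profile
  out/minikube-linux-amd64 -p embed-certs-823470 addons list
  # the metrics-server addon installs a Deployment in kube-system
  kubectl --context embed-certs-823470 -n kube-system get deployment metrics-server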
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
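The pod_ready polling summarized above can be approximated from outside the test harness with kubectl's built-in wait; illustrative only, assuming the embed-certs-823470 context configured in this log and the same 6m budget:

  # block until CoreDNS and the apiserver static pod report Ready
  kubectl --context embed-certs-823470 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
  kubectl --context embed-certs-823470 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m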
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
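A quick manual check of the finished cluster would look like the following, using the context name reported on the line above; illustrative, not part of the test run:

  kubectl --context embed-certs-823470 get nodes -o wide
  kubectl --context embed-certs-823470 get pods -A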
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
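The four grep/rm pairs above all apply one rule: keep a kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the upcoming kubeadm init regenerates it. A compact sketch of the same check, run inside the guest with the paths from the log:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # missing file or stale server address: remove it before kubeadm init
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done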
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
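The kubelet-check and api-check phases reported above poll two plain health endpoints. The same probes can be run by hand inside the node; a sketch, assuming the 192.168.50.161:8443 apiserver endpoint from this profile and that anonymous access to /healthz is still allowed by the default RBAC:

  curl -sSL http://localhost:10248/healthz        # kubelet healthz, expected body: ok
  curl -sk https://192.168.50.161:8443/healthz    # kube-apiserver healthz (self-signed cert), expected body: ok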
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
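To see what the container runtime will load for the bridge CNI configured above, the generated conflist and the plugin binaries it references can be inspected; the plugin directory below is the conventional one and is an assumption, not taken from this log:

  sudo cat /etc/cni/net.d/1-k8s.conflist    # the 496-byte config scp'd above
  ls /opt/cni/bin/                          # bridge and host-local binaries a bridge conflist typically names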
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
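The two kubectl calls above grant minikube's bootstrap access (cluster-admin bound to kube-system:default) and label the node for the profile. Their effect can be verified with the same in-guest binary and kubeconfig the log uses; illustrative only:

  sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get clusterrolebinding minikube-rbac -o wide
  sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get node no-preload-441627 --show-labels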
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
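The repeated "kubectl get sa default" calls above are a simple poll: kubeadm.go retries until the default ServiceAccount exists before reporting elevateKubeSystemPrivileges as done. A minimal equivalent loop, using the binary and kubeconfig paths shown in the log:

  until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 1   # the log shows retries roughly every half second
  done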
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
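	[editor note] The pod_ready waits above repeatedly check each system-critical pod until its Ready condition is True (or the 6m0s budget runs out). A minimal client-go sketch of that kind of polling loop follows; the kubeconfig path is a placeholder and the pod name is taken from this run only for illustration, this is not the harness's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitForPodReady(clientset *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // re-check periodically, as the pod_ready log lines above do
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
	}

	func main() {
		// Placeholder kubeconfig path; the real harness builds its client differently.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(config)
		if err := waitForPodReady(clientset, "kube-system", "coredns-6f6b679f8f-mvn98", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
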
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
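	[editor note] The repeated kubelet-check lines above are kubeadm polling the kubelet's local healthz endpoint, equivalent to 'curl -sSL http://localhost:10248/healthz'. A rough Go sketch of that probe follows; the port 10248 comes from the log, everything else is illustrative rather than kubeadm's actual code.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeKubeletHealthz performs the same kind of check as the kubelet-check lines above.
	func probeKubeletHealthz() error {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			return err // e.g. "connection refused" when the kubelet is not running
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("kubelet healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := probeKubeletHealthz(); err != nil {
			fmt.Println("kubelet not healthy:", err)
			return
		}
		fmt.Println("kubelet healthz: ok")
	}
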
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
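	[editor note] The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not contain it, so kubeadm regenerates them on the next init. A simplified local sketch of that cleanup follows; the file list and endpoint come from the log, and a real node would need the same sudo/ssh plumbing minikube uses.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
	// reference the expected control-plane endpoint, mirroring the grep/rm loop above.
	func removeStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: remove it so kubeadm writes a fresh one.
				if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Println("failed to remove", p, rmErr)
				}
			}
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
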
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
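	[editor note] The "Gathering logs for ..." steps above shell out to journalctl, dmesg, kubectl describe nodes, and crictl to build the diagnostic bundle. A compact sketch of running such commands and collecting their output follows; the commands mirror the ones in the log, but this runs them locally whereas the harness runs them over SSH with sudo.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Diagnostic commands mirroring the log-gathering steps above (run locally here).
		cmds := map[string][]string{
			"kubelet":          {"journalctl", "-u", "kubelet", "-n", "400"},
			"dmesg":            {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
			"CRI-O":            {"journalctl", "-u", "crio", "-n", "400"},
			"container status": {"crictl", "ps", "-a"},
		}
		for name, args := range cmds {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("==> %s <==\n%s\n", name, out)
			if err != nil {
				fmt.Println("command failed:", err)
			}
		}
	}
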
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
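	A hedged reading of the failure above: kubeadm's wait-control-plane phase timed out because the kubelet never became healthy, and minikube's own suggestion points at a cgroup-driver mismatch. A minimal sketch of the follow-up it recommends (the profile name below is a placeholder, not taken from this log):
	
	  # inspect why the kubelet keeps exiting (commands quoted in the kubeadm output above)
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  # retry with the cgroup driver minikube suggests for this symptom
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd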
	
	
	==> CRI-O <==
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.877782436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074416877670919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18a8c75a-19ca-456f-87e6-e3c07b9f9f2d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.878353763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e659ff1-4adf-4d29-bd77-f22ade9bae02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.878406014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e659ff1-4adf-4d29-bd77-f22ade9bae02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.878614532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e659ff1-4adf-4d29-bd77-f22ade9bae02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.920748637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48fb617a-a45e-4ec4-b095-89045c550b4d name=/runtime.v1.RuntimeService/Version
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.920821880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48fb617a-a45e-4ec4-b095-89045c550b4d name=/runtime.v1.RuntimeService/Version
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.922250841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceda314c-c6da-402d-8c29-7374fd8ce503 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.922637331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074416922616304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceda314c-c6da-402d-8c29-7374fd8ce503 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.923345914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86c35695-4940-4f8c-b119-04d107d76451 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.923408376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86c35695-4940-4f8c-b119-04d107d76451 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.923628523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86c35695-4940-4f8c-b119-04d107d76451 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.972170127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eebc184-8b0f-4786-83aa-c6fc14ed2402 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.972243013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eebc184-8b0f-4786-83aa-c6fc14ed2402 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.973781668Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3208d76-4e1d-4370-8372-807e034d596a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.974496777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074416974464111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3208d76-4e1d-4370-8372-807e034d596a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.975214714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f24e652a-e439-4c25-a629-e9b342cc1f5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.975275019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f24e652a-e439-4c25-a629-e9b342cc1f5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:36 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:36.975460028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f24e652a-e439-4c25-a629-e9b342cc1f5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.009273271Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3aee532a-8499-41d1-b6ed-7f56e62284e3 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.009471087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3aee532a-8499-41d1-b6ed-7f56e62284e3 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.010576623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29309605-754c-497a-9e8c-6ed4d1bcc267 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.010943907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074417010922169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29309605-754c-497a-9e8c-6ed4d1bcc267 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.011540790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28fd81f8-15fa-406d-8fe5-63bf560f377b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.011591206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28fd81f8-15fa-406d-8fe5-63bf560f377b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:33:37 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:33:37.011774585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28fd81f8-15fa-406d-8fe5-63bf560f377b name=/runtime.v1.RuntimeService/ListContainers
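	The repeated ImageFsInfo / ListContainers entries above are CRI polling traffic captured while the log was gathered. A hedged sketch for querying CRI-O directly over the same socket, and (as an assumption about CRI-O's configuration, not something shown in this log) for lowering the debug verbosity:
	
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	  # assumption: verbosity is governed by log_level in /etc/crio/crio.conf ([crio.runtime]);
	  # setting it to "info" and running `sudo systemctl restart crio` would quiet these lines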
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f547b541f49f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   2557921b1e5af       storage-provisioner
	f9cd76f5a6172       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d9fdfe858c149       busybox
	0e12220c2cd13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   0fc6d816561d4       coredns-6f6b679f8f-8tl4c
	0eea9deb6fa46       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   f2df4fd240be1       kube-proxy-wnkk2
	8613bb1986481       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   2557921b1e5af       storage-provisioner
	875409a409653       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   1399a9b9e890d       kube-scheduler-default-k8s-diff-port-038572
	90eea23cc577e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   c6e0c0b82e6e9       kube-controller-manager-default-k8s-diff-port-038572
	c977986999bce       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   f4d07cd5c3217       kube-apiserver-default-k8s-diff-port-038572
	9e2caf34bd614       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   867295a3fc961       etcd-default-k8s-diff-port-038572
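	The same container inventory can be reproduced on the node with the crictl invocation quoted earlier in the kubeadm troubleshooting text; the ssh wrapper below follows the usual minikube pattern and is a sketch, not output from this run:
	
	  out/minikube-linux-amd64 -p default-k8s-diff-port-038572 ssh \
	    "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"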
	
	
	==> coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55117 - 14713 "HINFO IN 3375249004560547600.4191906258613393435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014842328s
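	CoreDNS itself looks healthy here (configuration loaded, a single NXDOMAIN self-check). A hedged way to pull the same information live, assuming the kubeconfig context carries the profile name used in this run:
	
	  kubectl --context default-k8s-diff-port-038572 -n kube-system logs coredns-6f6b679f8f-8tl4c
	  kubectl --context default-k8s-diff-port-038572 -n kube-system get configmap coredns -o yaml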
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-038572
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-038572
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=default-k8s-diff-port-038572
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_12_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:12:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-038572
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:33:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:30:50 +0000   Mon, 19 Aug 2024 13:12:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:30:50 +0000   Mon, 19 Aug 2024 13:12:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:30:50 +0000   Mon, 19 Aug 2024 13:12:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:30:50 +0000   Mon, 19 Aug 2024 13:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.69
	  Hostname:    default-k8s-diff-port-038572
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3611796363d492a87c9864b44edb9db
	  System UUID:                e3611796-363d-492a-87c9-864b44edb9db
	  Boot ID:                    429e821f-4b53-4eb5-b3fa-7101007e33fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-6f6b679f8f-8tl4c                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-038572                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-038572             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-038572    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-wnkk2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-038572             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-hmghm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-038572 event: Registered Node default-k8s-diff-port-038572 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-038572 event: Registered Node default-k8s-diff-port-038572 in Controller
	
	
	==> dmesg <==
	[Aug19 13:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050317] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041187] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.770120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.548419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.928103] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.058915] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062557] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.181465] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.137365] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.291236] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[Aug19 13:20] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.060240] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.183138] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +4.585180] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.458945] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +3.303364] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.788745] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] <==
	{"level":"info","ts":"2024-08-19T13:20:07.152595Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:20:07.152629Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:20:07.152503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:20:07.153905Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:20:07.154259Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:20:07.154744Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:20:07.155641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.69:2379"}
	{"level":"info","ts":"2024-08-19T13:20:22.617358Z","caller":"traceutil/trace.go:171","msg":"trace[1341966766] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"131.026461ms","start":"2024-08-19T13:20:22.486317Z","end":"2024-08-19T13:20:22.617343Z","steps":["trace[1341966766] 'process raft request'  (duration: 130.870893ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T13:20:22.795784Z","caller":"traceutil/trace.go:171","msg":"trace[1219515661] linearizableReadLoop","detail":"{readStateIndex:638; appliedIndex:637; }","duration":"137.469375ms","start":"2024-08-19T13:20:22.658295Z","end":"2024-08-19T13:20:22.795764Z","steps":["trace[1219515661] 'read index received'  (duration: 111.835711ms)","trace[1219515661] 'applied index is now lower than readState.Index'  (duration: 25.632967ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T13:20:22.796127Z","caller":"traceutil/trace.go:171","msg":"trace[1221977632] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"167.37966ms","start":"2024-08-19T13:20:22.628729Z","end":"2024-08-19T13:20:22.796109Z","steps":["trace[1221977632] 'process raft request'  (duration: 141.47724ms)","trace[1221977632] 'compare'  (duration: 25.35211ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T13:20:22.796244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.933036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-038572\" ","response":"range_response_count:1 size:5736"}
	{"level":"info","ts":"2024-08-19T13:20:22.796666Z","caller":"traceutil/trace.go:171","msg":"trace[1504298882] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-038572; range_end:; response_count:1; response_revision:601; }","duration":"138.386941ms","start":"2024-08-19T13:20:22.658265Z","end":"2024-08-19T13:20:22.796652Z","steps":["trace[1504298882] 'agreement among raft nodes before linearized reading'  (duration: 137.900507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T13:20:23.324619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.821503ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8983152294610211751 > lease_revoke:<id:7caa916ac43d1302>","response":"size:29"}
	{"level":"warn","ts":"2024-08-19T13:20:23.426883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.110268ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8983152294610211752 > lease_revoke:<id:7caa916ac43d12a1>","response":"size:29"}
	{"level":"info","ts":"2024-08-19T13:21:08.056348Z","caller":"traceutil/trace.go:171","msg":"trace[1502655456] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"324.655104ms","start":"2024-08-19T13:21:07.731668Z","end":"2024-08-19T13:21:08.056323Z","steps":["trace[1502655456] 'process raft request'  (duration: 285.934465ms)","trace[1502655456] 'compare'  (duration: 37.843883ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T13:21:08.057513Z","caller":"traceutil/trace.go:171","msg":"trace[1952833897] linearizableReadLoop","detail":"{readStateIndex:687; appliedIndex:686; }","duration":"253.384215ms","start":"2024-08-19T13:21:07.802324Z","end":"2024-08-19T13:21:08.055709Z","steps":["trace[1952833897] 'read index received'  (duration: 215.240219ms)","trace[1952833897] 'applied index is now lower than readState.Index'  (duration: 38.143327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T13:21:08.057859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.511047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-hmghm.17ed23d1ad906ee1\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-08-19T13:21:08.057932Z","caller":"traceutil/trace.go:171","msg":"trace[345354892] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-hmghm.17ed23d1ad906ee1; range_end:; response_count:1; response_revision:639; }","duration":"255.603254ms","start":"2024-08-19T13:21:07.802320Z","end":"2024-08-19T13:21:08.057923Z","steps":["trace[345354892] 'agreement among raft nodes before linearized reading'  (duration: 255.380523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T13:21:08.059113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T13:21:07.731655Z","time spent":"324.802273ms","remote":"127.0.0.1:49842","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4379,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" mod_revision:625 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" value_size:4313 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" > >"}
	{"level":"warn","ts":"2024-08-19T13:21:08.059299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.701469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2024-08-19T13:21:08.059666Z","caller":"traceutil/trace.go:171","msg":"trace[367566211] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-hmghm; range_end:; response_count:1; response_revision:639; }","duration":"231.064528ms","start":"2024-08-19T13:21:07.828589Z","end":"2024-08-19T13:21:08.059653Z","steps":["trace[367566211] 'agreement among raft nodes before linearized reading'  (duration: 230.674701ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T13:21:10.224536Z","caller":"traceutil/trace.go:171","msg":"trace[1098891225] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"123.493196ms","start":"2024-08-19T13:21:10.101025Z","end":"2024-08-19T13:21:10.224518Z","steps":["trace[1098891225] 'process raft request'  (duration: 123.390338ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T13:30:07.178163Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":845}
	{"level":"info","ts":"2024-08-19T13:30:07.191938Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":845,"took":"13.419339ms","hash":1896233454,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2854912,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-08-19T13:30:07.192134Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1896233454,"revision":845,"compact-revision":-1}
	
	
	==> kernel <==
	 13:33:37 up 13 min,  0 users,  load average: 0.08, 0.05, 0.06
	Linux default-k8s-diff-port-038572 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] <==
	W0819 13:30:09.453092       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:30:09.453212       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:30:09.454357       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:30:09.454419       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:31:09.454855       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 13:31:09.455062       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:31:09.455128       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0819 13:31:09.455141       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:31:09.456315       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:31:09.456375       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:33:09.456725       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:33:09.457072       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:33:09.457084       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:33:09.457204       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:33:09.458277       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:33:09.458309       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] <==
	E0819 13:28:12.080587       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:28:12.533914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:28:42.087118       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:28:42.540896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:29:12.093213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:29:12.549707       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:29:42.099529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:29:42.560050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:30:12.108253       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:30:12.566695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:30:42.115124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:30:42.574679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:30:50.634932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-038572"
	I0819 13:30:57.741932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="287.986µs"
	I0819 13:31:08.740304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="155.984µs"
	E0819 13:31:12.121087       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:31:12.583769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:31:42.127175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:31:42.591242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:32:12.138191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:32:12.599928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:32:42.146111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:32:42.607746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:33:12.153244       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:33:12.615056       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:20:09.284929       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:20:09.293053       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.69"]
	E0819 13:20:09.293112       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:20:09.326662       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:20:09.326708       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:20:09.326738       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:20:09.329171       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:20:09.329458       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:20:09.329482       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:20:09.331163       1 config.go:197] "Starting service config controller"
	I0819 13:20:09.331202       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:20:09.331226       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:20:09.331229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:20:09.331930       1 config.go:326] "Starting node config controller"
	I0819 13:20:09.332021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:20:09.432189       1 shared_informer.go:320] Caches are synced for node config
	I0819 13:20:09.432242       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:20:09.432253       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] <==
	I0819 13:20:06.470700       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:20:08.442059       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:20:08.442157       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:20:08.442168       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:20:08.442175       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:20:08.478597       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:20:08.478646       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:20:08.483484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:20:08.483534       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:20:08.484222       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:20:08.484295       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:20:08.584156       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:32:24 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:24.875002     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074344874077464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:32:33 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:33.726295     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:32:34 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:34.876479     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074354875949304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:32:34 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:34.876758     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074354875949304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:32:44 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:44.879308     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074364878772009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:32:44 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:44.879695     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074364878772009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:32:46 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:46.724675     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:32:54 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:54.882442     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074374881883140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:32:54 default-k8s-diff-port-038572 kubelet[945]: E0819 13:32:54.882762     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074374881883140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:00 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:00.725603     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:04.738681     945 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:04.884839     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074384884550444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:04 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:04.884869     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074384884550444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:14 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:14.726899     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:33:14 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:14.885879     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074394885602602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:14 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:14.885925     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074394885602602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:24 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:24.887685     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074404887330050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:24 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:24.888269     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074404887330050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:25 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:25.725241     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:33:34 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:34.889761     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074414889406058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:34 default-k8s-diff-port-038572 kubelet[945]: E0819 13:33:34.889802     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074414889406058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] <==
	I0819 13:20:09.188775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 13:20:39.196715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] <==
	I0819 13:20:40.076345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:20:40.094521       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:20:40.094709       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:20:57.499537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:20:57.500212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aba67e4-6226-4657-89c8-b0ed902c3845", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-038572_13afe944-8f32-45ac-845a-21f59001e97d became leader
	I0819 13:20:57.500840       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038572_13afe944-8f32-45ac-845a-21f59001e97d!
	I0819 13:20:57.603139       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038572_13afe944-8f32-45ac-845a-21f59001e97d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hmghm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 describe pod metrics-server-6867b74b74-hmghm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-038572 describe pod metrics-server-6867b74b74-hmghm: exit status 1 (67.058546ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hmghm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-038572 describe pod metrics-server-6867b74b74-hmghm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 13:26:08.804956 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-823470 -n embed-certs-823470
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 13:34:48.441793458 +0000 UTC m=+5889.076583745
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-823470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-823470 logs -n 25: (2.063164168s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
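
The cluster config above is what gets persisted to the profile's config.json (see the "Saving config to ..." line a few entries below). A minimal sketch, assuming the workspace paths shown in this log, of how that persisted config could be inspected out of band:

	# Hypothetical inspection of the saved profile config; paths are taken from this log.
	MK_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	python3 -m json.tool "$MK_HOME/profiles/embed-certs-823470/config.json" | head -n 40
	# or, via the binary built for this run:
	out/minikube-linux-amd64 profile list --output json
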
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
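
At this point process 2151869 has spent roughly four minutes retrying SSH to 192.168.50.161:22 and getting "no route to host"; this is the condition that later surfaces as "provision: host is not running" for no-preload-441627. A rough sketch of how the same reachability could be probed by hand on the CI host, assuming standard tooling outside the test harness:

	# Hypothetical manual checks for the unreachable VM; IP and libvirt URI are taken from this log.
	virsh -c qemu:///system list --all          # is the domain actually running?
	ping -c 3 192.168.50.161                    # basic L3 reachability
	nc -vz -w 5 192.168.50.161 22               # does anything answer on the SSH port?
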
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
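
The provisioner has now set the guest hostname and ensured a 127.0.1.1 mapping in /etc/hosts via the command echoed above. A quick sketch, assuming an SSH session into the guest, of how that result could be verified:

	# Hypothetical verification of the hostname provisioning step shown above.
	hostname                          # expected: default-k8s-diff-port-038572
	grep '^127.0.1.1' /etc/hosts      # expected to contain the mapping added above
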
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
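
configureAuth generated a server certificate with the SANs listed above (127.0.0.1, 192.168.72.69, the machine name, localhost, minikube) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A small sketch, assuming openssl is available on the CI host, of how the SANs in the generated cert could be confirmed:

	# Hypothetical SAN check for the server certificate generated in this run.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
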
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
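
The guest-clock check above reads the guest time with "date +%s.%N" over SSH and compares it against the host clock; here the skew is about 74ms, well inside tolerance. Roughly the same check by hand, assuming SSH access with the key used in this run:

	# Hypothetical manual clock-skew check mirroring the comparison above.
	KEY=/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa
	guest=$(ssh -i "$KEY" docker@192.168.72.69 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN{printf "delta: %.6f s\n", h-g}'
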
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
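The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, force the "cgroupfs" cgroup manager, set conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough Go equivalent of the two key/value edits, operating on a local file rather than over SSH with sudo; the file path and helper name are illustrative:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // patchCrioConf rewrites the pause_image and cgroup_manager keys in a
    // crio.conf-style file, mirroring the sed edits in the log above.
    func patchCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := pauseRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        out = cgroupRe.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Illustrative local path; on the node this is /etc/crio/crio.conf.d/02-crio.conf.
        if err := patchCrioConf("02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }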
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
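The sysctl probe above fails because the br_netfilter module is not loaded yet, so the flow falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A small sketch of that probe-then-load fallback, assuming root and local execution; the command strings come from the log, the structure is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Probe the bridge netfilter sysctl; if the knob is missing, the
        // br_netfilter module has not been loaded yet, so load it explicitly.
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := run("modprobe", "br_netfilter"); err != nil {
                fmt.Fprintln(os.Stderr, "modprobe failed:", err)
                os.Exit(1)
            }
        }
        // Enable IPv4 forwarding, as `echo 1 > /proc/sys/net/ipv4/ip_forward` does above.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, "enabling ip_forward:", err)
            os.Exit(1)
        }
    }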
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
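After restarting CRI-O the test waits up to 60s for /var/run/crio/crio.sock to appear and then for `crictl version` to answer, which is the "Version: 0.1.0 / RuntimeName: cri-o" block above. A minimal polling loop for the same idea; the timeout and paths are taken from the log, the helper is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Once the socket is up, `crictl version` should report the runtime.
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            os.Exit(1)
        }
    }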
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
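The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends a fresh "192.168.72.1<TAB>host.minikube.internal" line, copying the temp file back with sudo. The same filtering logic, sketched in Go on an in-memory copy of the file; paths and the helper name are illustrative, and the sketch only prints the result instead of writing it back:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry removes stale lines ending in "\t<name>" and appends "ip\tname".
    func ensureHostsEntry(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the old entry, like `grep -v $'\thost.minikube.internal$'`
            }
            keep = append(keep, line)
        }
        return strings.TrimRight(strings.Join(keep, "\n"), "\n") +
            fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Writing back would need root; printing keeps the sketch side-effect free.
        fmt.Print(ensureHostsEntry(string(data), "192.168.72.1", "host.minikube.internal"))
    }

The same pattern is reused later for the control-plane.minikube.internal entry.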
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
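Interleaved with the cluster restart above, the old-k8s-version-620467 VM is being recreated and the driver polls its DHCP lease, retrying with a growing delay ("will retry after 217ms ... 836ms"). A generic sketch of that kind of retry loop; `lookupIP` is a stand-in for the libvirt lease query and the timings are illustrative:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a stand-in for querying the libvirt network for the domain's lease.
    func lookupIP() (string, error) { return "", errNoLease }

    func main() {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Grow the delay with a little jitter, similar to the retry.go lines above.
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("waiting for machine to come up, retrying after %v\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        fmt.Println("gave up waiting for an IP")
    }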
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
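The preload path above: `crictl images --output json` shows the v1.31.0 images are missing, so the cached preloaded-images tarball is copied to the VM and unpacked into /var with lz4-compressed tar, after which a second `crictl images` confirms everything is present. A local sketch of the existence check plus extraction; the paths are from the log, but minikube performs these steps over SSH with sudo:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // target path used in the log

        // Equivalent of `stat -c "%s %y" /preloaded.tar.lz4`: copy only if missing.
        if _, err := os.Stat(tarball); os.IsNotExist(err) {
            fmt.Println("tarball not present, would scp the cached preload here")
        }

        // Extract, preserving xattrs so file capabilities survive, as in the log:
        // tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
    }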
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
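The kubeadm.yaml rendered above stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents together from a handful of node values (IP, API server port, node name, CRI socket) and is then copied to /var/tmp/minikube/kubeadm.yaml.new. A tiny text/template sketch that renders just the InitConfiguration section from those values; the field layout follows the YAML above, while the struct and template names are illustrative:

    package main

    import (
        "os"
        "text/template"
    )

    // nodeParams holds the values substituted into the kubeadm YAML above.
    type nodeParams struct {
        IP        string
        Port      int
        Name      string
        CRISocket string
    }

    const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.IP}}
      taints: []
    `

    func main() {
        p := nodeParams{
            IP:        "192.168.72.69",
            Port:      8444,
            Name:      "default-k8s-diff-port-038572",
            CRISocket: "unix:///var/run/crio/crio.sock",
        }
        tmpl := template.Must(template.New("init").Parse(initConfig))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }

Note the template literal here carries the source indentation of this sketch; a real generator would keep the YAML flush-left as in the log.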
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
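Each CA copied to /usr/share/ca-certificates above is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), so anything using the system trust store can find it by hash lookup. A sketch of the hash-then-symlink step, shelling out to openssl the way the log does; the helper name is illustrative and root is assumed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of the certificate at pem and
    // symlinks /etc/ssl/certs/<hash>.0 to it, mirroring the
    // `openssl x509 -hash -noout` + `ln -fs` pair in the log above.
    func linkCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }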
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
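The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate is still valid for at least another 24 hours before the restart proceeds. The same check can be expressed in pure Go with crypto/x509; the path below is one of the certs from the log and the helper name is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least d, mirroring `openssl x509 -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for another 24h:", ok)
    }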
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
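The grep-then-rm sequence above treats each kubeconfig under /etc/kubernetes as stale unless it already points at https://control-plane.minikube.internal:8444, removing the ones that do not so the subsequent `kubeadm init phase kubeconfig all` can regenerate them. A compact sketch of the same check; the endpoint and file list are copied from the log, and the deletion is assumed to run as root:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8444"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing elsewhere: remove it so kubeadm can write
                // a fresh one, as in the log above.
                fmt.Println("removing stale", conf)
                _ = os.Remove(conf)
            }
        }
    }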
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
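The healthz loop above polls https://192.168.72.69:8444/healthz, tolerating 403 (anonymous access denied before RBAC bootstrap finishes) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) until the endpoint returns 200 "ok". A stripped-down version of that wait, skipping server certificate verification the way a bootstrap probe must; the address comes from the log, the timeout and helper structure are illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        const url = "https://192.168.72.69:8444/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver's serving cert is not in the system trust store yet,
            // so this bootstrap-only probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }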
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
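For reference, the oom_adj read above (ops.go) is just a remote cat of /proc/<pid>/oom_adj for the apiserver over ssh_runner. A minimal local sketch of the same probe, assuming a single kube-apiserver process and that it runs on the node itself, might look like this (illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Find the kube-apiserver PID and read its oom_adj. Assumes exactly one
    // matching process; pgrep would return multiple lines otherwise.
    func main() {
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		return
    	}
    	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // the log above reports -16
    }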
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
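Once the three addons are reported as enabled, one way to spot-check the rollout on this profile is a quick kubectl probe like the sketch below. It assumes the usual upstream object names (a metrics-server Deployment in kube-system and the v1beta1.metrics.k8s.io APIService); those defaults are not read out of this log and are only an assumption here.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Hedged verification sketch: confirm the objects the addon manifests above
    // were applied for actually exist. Assumes kubectl on PATH and a kubeconfig
    // context named after the profile.
    func main() {
    	ctx := "default-k8s-diff-port-038572"
    	for _, args := range [][]string{
    		{"--context", ctx, "-n", "kube-system", "get", "deployment", "metrics-server"},
    		{"--context", ctx, "get", "apiservice", "v1beta1.metrics.k8s.io"},
    		{"--context", ctx, "get", "storageclass"}, // default-storageclass addon
    	} {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		fmt.Printf("kubectl %v\n%s(err=%v)\n\n", args, out, err)
    	}
    }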
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
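The configureAuth step above generates a server certificate with the SANs listed in the provision.go line (127.0.0.1, 192.168.39.238, localhost, minikube, old-k8s-version-620467) and copies it to /etc/docker/server.pem on the guest. A small sketch for inspecting those SANs, assuming it is run on the guest (or pointed at the local copy under .minikube/machines/<name>/server.pem), could be:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // Parse the provisioned server certificate and print its SANs.
    func main() {
    	raw, err := os.ReadFile("/etc/docker/server.pem") // remote path from the scp lines above
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// Expect the SANs generated by provision.go for old-k8s-version-620467.
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    }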
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
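The skew check in the fix.go lines above is plain subtraction of the host-side "Remote" timestamp from the guest's `date +%s.%N` output; redoing the arithmetic with the two values from this log reproduces the reported delta exactly:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Reproduce the clock-skew arithmetic from fix.go above.
    func main() {
    	guest := time.Unix(0, 1724073615_677156491)  // 1724073615.677156491 (guest clock)
    	remote := time.Unix(0, 1724073615_592158206) // 2024-08-19 13:20:15.592158206 UTC (remote)
    	fmt.Println("delta:", guest.Sub(remote))     // prints 84.998285ms, within minikube's tolerance
    }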
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
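The "will retry after …" lines from retry.go above poll libvirt until the embed-certs-823470 domain reports an IP address, sleeping a growing, jittered interval between attempts. A rough sketch of that wait-and-retry pattern; the condition closure, backoff bounds and timeout below are illustrative assumptions, not minikube's exact parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls cond with an increasing, jittered delay until it reports
// true, returns an error, or the overall deadline is exceeded.
func waitFor(cond func() (bool, error), maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Jitter the delay, roughly like the 1.3s / 2.1s / 3.2s gaps above.
		d := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		delay *= 2
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	// Stand-in for "the libvirt domain has acquired an IP address".
	err := waitFor(func() (bool, error) {
		attempts++
		return attempts >= 4, nil
	}, 2*time.Minute)
	fmt.Println("done, err =", err)
}
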
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
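The guest-clock check above runs `date +%s.%N` inside the VM, parses the result, and compares it against the host-side timestamp; the skew is accepted when it stays inside a tolerance. A small sketch of that comparison using the values from this log (the 5s tolerance is an assumed figure for illustration, not the value used by fix.go):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as logged above.
	guestOut := "1724073634.971972974"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// Host-side view of "now" at the moment of the check.
	remote := time.Date(2024, 8, 19, 13, 20, 34, 907292264, time.UTC)
	delta := guest.Sub(remote)

	const tolerance = 5 * time.Second // assumed tolerance for the sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync the clock\n", delta)
	}
}
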
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
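The /etc/hosts one-liner above is deliberately idempotent: it strips any existing line ending in a tab plus "control-plane.minikube.internal" and appends a fresh entry for the node's current IP. A sketch of the same update done directly in Go; writing the file in place (rather than the grep/cp-over-SSH pipeline in the log) is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts removes any existing entry for host and appends "ip<TAB>host".
func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.61.50", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}
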
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
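The repeated pgrep runs above are minikube waiting for a kube-apiserver process to appear, polling roughly every 500ms. A minimal local sketch of that loop (the real thing runs the command over SSH via ssh_runner; this version runs it on the local host purely for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// same command as the log: pgrep exits non-zero when no process matches
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && strings.TrimSpace(string(out)) != "" {
    			fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver process")
    }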
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
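The sequence above installs each CA into the guest's system trust store: copy the PEM to /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink it into /etc/ssl/certs as <hash>.0 (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 20941932.pem), which is how OpenSSL locates trusted CAs. A small Go sketch of the same steps, shelling out to openssl and ln just as the log does; the path is the one used above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl x509 -hash -noout -in <cert> prints the subject hash
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// equivalent to: sudo ln -fs <cert> /etc/ssl/certs/<hash>.0
    	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", cert, "->", link)
    }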
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
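Each "-checkend 86400" run above asks openssl whether the certificate will expire within the next 24 hours. An equivalent check in Go's crypto/x509, using one of the paths from the log as an example:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("not PEM data")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// fail if the cert is no longer valid 86400 seconds from now
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }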
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
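The healthz wait above polls https://192.168.61.50:8443/healthz every 500ms until it returns 200 "ok"; the earlier 403 (anonymous user) and 500 responses (rbac/bootstrap-roles, scheduling and apiservice post-start hooks still running) are expected while the restarted control plane finishes initialising. A minimal sketch of that polling loop; InsecureSkipVerify is only for illustration, the real client trusts minikube's own CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.50:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			// 403/500 with a hook report, as seen in the log above
    			fmt.Println("healthz returned", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }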
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
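The pod_ready lines above fetch each system-critical pod and test its Ready condition, skipping the wait while the hosting node itself reports Ready=False (the "(skipping!)" entries). A sketch of the core check with client-go, using the kube-proxy pod name from the log; the kubeconfig path and client wiring here are illustrative, not minikube's own helpers:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// assumes a kubeconfig at the default ~/.kube/config location
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-jcnwd", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }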
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
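The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a ~500ms wait loop: the runner keeps probing until a kube-apiserver process matching that pattern exists. A minimal Go sketch of the same pattern (illustrative only, not minikube's wait code; the 4-minute deadline is an assumption):

    // Poll for a kube-apiserver process roughly every 500ms until found or timed out.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // hypothetical timeout
        for time.Now().Before(deadline) {
            // -x: exact match, -n: newest process, -f: match the full command line
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }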
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
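The pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition turns True (or the 4m0s budget runs out). A minimal client-go sketch of that check (illustrative, not minikube's implementation; the kubeconfig path is a placeholder and the pod name is taken from the log):

    // Poll a pod until its Ready condition is True or a timeout elapses.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-823470", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(400 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }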
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
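The fix.go lines above read the guest's clock over SSH (`date +%s.%N`), compare it with the host clock, and skip a resync because the ~98ms delta is within tolerance. A minimal sketch of that comparison (illustrative; the 2s tolerance value is an assumption, not minikube's constant):

    // Compare a guest timestamp against the host clock and check it against a tolerance.
    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta returns the absolute difference between the guest and host clocks.
    func clockDelta(guest, host time.Time) time.Duration {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        host := time.Now()
        guest := host.Add(98 * time.Millisecond) // e.g. the ~98ms delta seen in the log
        tolerance := 2 * time.Second             // hypothetical tolerance

        if d := clockDelta(guest, host); d <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", d)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
        }
    }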
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
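The block above shows cri-o being reconfigured in place: sed edits to /etc/crio/crio.conf.d/02-crio.conf set the pause image and the cgroupfs cgroup manager, crictl is pointed at the cri-o socket, and the service is restarted. A minimal Go sketch that assembles the same shell commands (illustrative only; in minikube these run on the guest through ssh_runner):

    // Build the sed/systemctl commands seen in the log for configuring cri-o.
    package main

    import "fmt"

    const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

    // sedSet returns a sed command that rewrites a `key = ...` line in crioConf.
    func sedSet(key, value string) string {
        return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, crioConf)
    }

    func main() {
        cmds := []string{
            sedSet("pause_image", "registry.k8s.io/pause:3.10"),
            sedSet("cgroup_manager", "cgroupfs"),
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, c := range cmds {
            fmt.Println(c) // in minikube each command is executed remotely over SSH
        }
    }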
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
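The image-loading phase that finishes above transfers each cached tarball to the node and runs "sudo podman load -i <path>" on it, one image at a time, timing each call. A hedged sketch of that per-image step, run locally with os/exec (the tarball path is taken from the log and is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImage shells out to podman to import one cached image tarball and reports how long it took.
func loadImage(tarball string) error {
	start := time.Now()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	fmt.Printf("loaded %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	// Illustrative path matching the log above; adjust for your environment.
	if err := loadImage("/var/lib/minikube/images/kube-proxy_v1.31.0"); err != nil {
		fmt.Println(err)
	}
}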
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
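The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new here and later consumed by the "kubeadm init phase ..." commands further down in this log. A hedged sketch of invoking one such phase with the version-pinned binaries directory prefixed onto PATH, mirroring the command shown later (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirror the log's pattern: run kubeadm with the version-pinned binaries directory
	// prefixed onto PATH, against the generated config file.
	path := "PATH=/var/lib/minikube/binaries/v1.31.0:" + os.Getenv("PATH")
	cmd := exec.Command("sudo", "env", path,
		"kubeadm", "init", "phase", "certs", "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm phase failed:", err)
	}
}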
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
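The "openssl x509 -noout -in <cert> -checkend 86400" runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the cluster restart is attempted. An equivalent check in Go using crypto/x509 (a minimal sketch; the certificate path is one of those in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	// openssl's -checkend 86400 fails (non-zero exit) exactly when this prints true.
	fmt.Println("expires within 24h:", soon)
}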
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
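The healthz wait above polls https://192.168.50.161:8443/healthz about twice per second, tolerating the initial 403 (anonymous probe) and 500 (poststarthooks still running) responses until the endpoint finally returns 200. A hedged sketch of that loop; TLS verification is skipped only because this is an anonymous probe against the apiserver's self-signed serving certificate, as in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it answers 200 or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 are expected while RBAC bootstrap and poststarthooks finish.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.50.161:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}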
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
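The pod_ready lines above wait up to 4m0s for each system-critical pod's Ready condition to become True, skipping pods whose node is not yet Ready. A minimal client-go sketch of the same readiness check (illustrative; the kubeconfig path and pod name are taken from this log and are assumptions about the environment):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient API errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-proxy-v76kp", 4*time.Minute); err != nil {
		fmt.Println("pod not ready:", err)
		return
	}
	fmt.Println("pod is ready")
}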
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
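	(The block above is one pass of minikube's log collector for this node: it probes for a running kube-apiserver process, asks CRI-O via crictl for each expected control-plane container, and, finding none, falls back to dumping kubelet, dmesg, describe-nodes, CRI-O, and container-status output. As a minimal sketch, the same probes can be reproduced by hand on the node; every command below appears verbatim in the log, assuming shell access to the minikube VM:

	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"   # empty output means no such container exists
	  done
	  sudo journalctl -u kubelet -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	)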
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
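	(Interleaved with the log-collection loop, three other test processes (2153523, 2151869, 2152365) are polling their metrics-server pods, which never report Ready. The equivalent manual check is a kubectl condition query; a sketch assuming a working kubeconfig for the affected cluster, where the k8s-app=metrics-server label selector is an assumption and the pod names shown above could be used directly instead:

	  kubectl -n kube-system get pod -l k8s-app=metrics-server \
	    -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
	)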
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
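	(Every describe-nodes attempt fails the same way: kubectl cannot reach the API server on localhost:8443, which is consistent with the empty crictl listings above, since no kube-apiserver container ever comes up. A hedged manual cross-check, run on the node, to confirm nothing is listening on the apiserver port:

	  sudo ss -ltnp | grep 8443                       # expect no listener while the apiserver is down
	  curl -sk https://localhost:8443/healthz; echo   # expect "connection refused"
	)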
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
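
The block above is minikube's diagnostic loop while the control plane is down: for each expected component it runs "sudo crictl ps -a --quiet --name=<component>" on the node and treats empty output as the "found id: """ / "0 containers" case, then falls back to gathering kubelet, dmesg, CRI-O and container-status logs. Below is a minimal Go sketch of that container check, assuming a plain local exec.Command runner rather than minikube's ssh_runner; the helper name listContainerIDs is hypothetical and this is not minikube's actual cri.go code.

    // Sketch only: list CRI containers for one component the way the
    // "listing CRI containers in root" / "0 containers" lines above do.
    // Assumes a local crictl binary instead of minikube's SSH runner.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

Empty (but successful) crictl output is exactly what produces the repeated "No container was found matching ..." warnings in the log.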
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
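
The interleaved pod_ready lines are per-pod readiness polls with an overall deadline; when the metrics-server pod never reports Ready, the wait gives up after roughly four minutes with "context deadline exceeded", as the two lines above show. A rough Go sketch of that wait pattern follows, with a stubbed isReady probe standing in for a real pod-status check (the function names and intervals are illustrative, not minikube's pod_ready.go implementation):

    // Minimal sketch of a deadline-bounded readiness poll.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func waitPodReady(ctx context.Context, isReady func() bool, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if isReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // surfaces as "context deadline exceeded"
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        // Stub probe that never becomes Ready, so the wait times out the way
        // the metrics-server pods do in the log above.
        err := waitPodReady(ctx, func() bool { return false }, 500*time.Millisecond)
        fmt.Println("wait finished:", err)
    }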
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
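
Unlike process 2152780, process 2152365 does find running components, so its gathering step switches from "No container was found" warnings to pulling the last 400 lines of each container's log with "sudo /usr/bin/crictl logs --tail 400 <id>". A small Go sketch of that step is below; the helper name containerLogTail and the local-exec assumption are illustrative only, and the container ID is copied from the log purely as an example value.

    // Sketch only: fetch the tail of a container's log via crictl, the way the
    // "Gathering logs for kube-controller-manager [...]" lines above do.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerLogTail(id string, lines int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("crictl logs %s: %w", id, err)
        }
        return string(out), nil
    }

    func main() {
        logTail, err := containerLogTail("90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27", 400)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(logTail)
    }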
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
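The addon phase above copies each manifest to /etc/kubernetes/addons inside the guest and applies it with the bundled kubectl. A minimal sketch of verifying the result from the host, assuming the profile/context name from this run and that the Deployment is simply named metrics-server (inferred from the pod name that appears later in the log):

    kubectl --context no-preload-441627 -n kube-system get deployment metrics-server
    kubectl --context no-preload-441627 get storageclass    # default-storageclass addon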
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
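The readiness loop above polls each system-critical pod through the API until it reports Ready. A purely illustrative equivalent with kubectl wait, using the same labels the log lists (not what minikube itself runs, which polls in Go):

    kubectl --context no-preload-441627 -n kube-system wait --timeout=6m \
      --for=condition=Ready pod -l k8s-app=kube-dns
    kubectl --context no-preload-441627 -n kube-system wait --timeout=6m \
      --for=condition=Ready pod -l k8s-app=kube-proxy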
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
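The healthz probe above is a plain HTTPS GET against the apiserver endpoint; reproducing it by hand is just the following (using -k here to skip certificate verification, whereas the test authenticates with the cluster CA):

    curl -k https://192.168.50.161:8443/healthz
    # expected response body: ok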
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
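At this point the kubeconfig has been written, so the cluster can be inspected directly, for example (context name as reported above):

    kubectl config current-context   # no-preload-441627
    kubectl get nodes -o wide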
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
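The kubeadm advice above reduces to a few node-level checks. A consolidated sketch, run inside the guest (e.g. via minikube ssh), using the kubelet health port and cri-o socket path quoted in the log:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    curl -sSL http://localhost:10248/healthz     # the probe kubeadm keeps retrying
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause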
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
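The cleanup above greps each existing kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it before retrying kubeadm init; in this run every grep failed because the files were already missing, so the rm calls were no-ops. A hedged shell equivalent of that loop:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done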
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
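The diagnostics pass above gathers the kubelet and CRI-O journals, dmesg, and container status before the run is declared failed. The same collection can be reproduced on the node with the commands the log quotes:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a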
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
	
	
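	The start failure above exits with K8S_KUBELET_NOT_RUNNING and points at the kubelet health checks plus a cgroup-driver override. A minimal sketch of those follow-up commands, assuming a systemd host running CRI-O; the <profile> placeholder is illustrative and should be replaced with the failing profile name, it is not taken from the log:

		# Check kubelet health (the commands quoted in the kubeadm error above)
		systemctl status kubelet
		journalctl -xeu kubelet

		# Look for a crashed control-plane container via CRI-O
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

		# Retry the start with the cgroup-driver override suggested by minikube
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
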
	==> CRI-O <==
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.941864322Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-95hnp,Uid:3600e8be-6152-427e-b049-52ff758d6b44,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073937979834144,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:25:36.169635244Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nwzh6,Uid:514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073937952175801,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:25:36.144070999Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9033a5c25eaac26efe68a7a1c0330316572d387b6fa8873b726066eed2afcff6,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-hkt29,Uid:8280db9d-3e31-4689-a2bc-fdcc8ca7f129,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073937888109172,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-hkt29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8280db9d-3e31-4689-a2bc-fdcc8ca7f129,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-08-19T13:25:37.579463430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073937771194243,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":
[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T13:25:37.465034633Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&PodSandboxMetadata{Name:kube-proxy-cn9sl,Uid:d1d8735d-9726-4b29-9e25-ff427831e523,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073936205903882,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T13:25:35.888464187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-823470,Uid:522e1b583833ac0f159fedcfa268e99a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073925415650866,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 522e1b583833ac0f159fedcfa268e99a,kubernetes.io/config.seen: 2024-08-19T13:25:24.962641501Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&PodSandboxM
etadata{Name:kube-apiserver-embed-certs-823470,Uid:7c885f0b2cbb96f0f93abd846852daa6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724073925405385237,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.50:8443,kubernetes.io/config.hash: 7c885f0b2cbb96f0f93abd846852daa6,kubernetes.io/config.seen: 2024-08-19T13:25:24.962640144Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-823470,Uid:313639c8d25b086f943d9287cedb068f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073925405027367,Labels:map[string]string{component: kub
e-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 313639c8d25b086f943d9287cedb068f,kubernetes.io/config.seen: 2024-08-19T13:25:24.962643503Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-823470,Uid:7c9892b9079966050e46e07583998bab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724073925398228527,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61
.50:2379,kubernetes.io/config.hash: 7c9892b9079966050e46e07583998bab,kubernetes.io/config.seen: 2024-08-19T13:25:24.962636563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-823470,Uid:7c885f0b2cbb96f0f93abd846852daa6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724073643337464898,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.50:8443,kubernetes.io/config.hash: 7c885f0b2cbb96f0f93abd846852daa6,kubernetes.io/config.seen: 2024-08-19T13:20:42.830466366Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collect
or/interceptors.go:74" id=ed20406c-7b40-49db-9ec2-87364d488ead name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.942549301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c73e9258-dc31-4dbb-9fe4-76e4d4de794b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.942599502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c73e9258-dc31-4dbb-9fe4-76e4d4de794b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.942852803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c73e9258-dc31-4dbb-9fe4-76e4d4de794b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.965915785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d61cd2f1-ac72-4182-bc37-026eb4de1e64 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.966033393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d61cd2f1-ac72-4182-bc37-026eb4de1e64 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.967269054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e3129cd-0da6-4fb6-8e10-0f71dc6d4203 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.967904272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074489967879841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e3129cd-0da6-4fb6-8e10-0f71dc6d4203 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.968752071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74db9aed-3496-4759-948e-6a8fd2cb0d37 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.968851982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74db9aed-3496-4759-948e-6a8fd2cb0d37 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:49 embed-certs-823470 crio[728]: time="2024-08-19 13:34:49.969105216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74db9aed-3496-4759-948e-6a8fd2cb0d37 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.021613144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70f43730-9a2e-4274-80bb-9e1c87231f4d name=/runtime.v1.RuntimeService/Version
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.021739776Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70f43730-9a2e-4274-80bb-9e1c87231f4d name=/runtime.v1.RuntimeService/Version
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.023427010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d491778-fc0d-4658-b41b-b1533d73a8c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.024890958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074490024853450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d491778-fc0d-4658-b41b-b1533d73a8c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.025572804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eaf3d6cd-0fa7-45a3-a937-2c1ef4ca78f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.025719069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eaf3d6cd-0fa7-45a3-a937-2c1ef4ca78f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.025962996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eaf3d6cd-0fa7-45a3-a937-2c1ef4ca78f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.061200822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d925b9d4-fcf0-4442-8536-8fb09af94fd1 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.061272751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d925b9d4-fcf0-4442-8536-8fb09af94fd1 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.062478861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c94c2578-cc53-40e7-a034-1761357bc171 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.063035750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074490063011179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c94c2578-cc53-40e7-a034-1761357bc171 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.063487684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9288a188-daf9-4ef4-ad77-0f6ce3e7c333 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.063541652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9288a188-daf9-4ef4-ad77-0f6ce3e7c333 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:34:50 embed-certs-823470 crio[728]: time="2024-08-19 13:34:50.064115088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9288a188-daf9-4ef4-ad77-0f6ce3e7c333 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce37bfb8cdce0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   39a67d1ffda30       coredns-6f6b679f8f-95hnp
	3798b17b161f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   e24249efddc1d       coredns-6f6b679f8f-nwzh6
	1b117aabf2cf0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5604ce01b5e69       storage-provisioner
	649e9a31e05f1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   0897f86c472a4       kube-proxy-cn9sl
	807d943675d96       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   3dac58c1da5f4       kube-scheduler-embed-certs-823470
	6fa09432aecb2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   e6eba311f6f86       kube-controller-manager-embed-certs-823470
	864097e3781d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   1fb5c394291b6       etcd-embed-certs-823470
	73fbc22569fbb       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   efb591bb60aa9       kube-apiserver-embed-certs-823470
	19477ce7f6a1b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   807d476f2ae26       kube-apiserver-embed-certs-823470
	
	
	==> coredns [3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-823470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-823470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=embed-certs-823470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-823470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:30:47 +0000   Mon, 19 Aug 2024 13:25:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:30:47 +0000   Mon, 19 Aug 2024 13:25:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:30:47 +0000   Mon, 19 Aug 2024 13:25:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:30:47 +0000   Mon, 19 Aug 2024 13:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.50
	  Hostname:    embed-certs-823470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74fdc2cbc3ab4025aaf85ade139f8d77
	  System UUID:                74fdc2cb-c3ab-4025-aaf8-5ade139f8d77
	  Boot ID:                    540c561b-6bfd-49ea-8248-6fdf75e50995
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-95hnp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-6f6b679f8f-nwzh6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-823470                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-823470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-823470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-cn9sl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-embed-certs-823470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-hkt29               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m13s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-823470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-823470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-823470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node embed-certs-823470 event: Registered Node embed-certs-823470 in Controller
	
	
	==> dmesg <==
	[  +0.048452] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.082771] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.594618] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600438] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.405228] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.056785] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056539] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.180386] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.135179] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.284597] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.397684] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.060905] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.032226] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.633974] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.809031] kauditd_printk_skb: 52 callbacks suppressed
	[Aug19 13:21] kauditd_printk_skb: 33 callbacks suppressed
	[Aug19 13:25] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.193737] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +4.475724] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.578120] systemd-fstab-generator[2895]: Ignoring "noauto" option for root device
	[  +5.671622] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.280785] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +9.255645] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b] <==
	{"level":"info","ts":"2024-08-19T13:25:25.976079Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:25:25.977938Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"eeb5a3f186171c82","initial-advertise-peer-urls":["https://192.168.61.50:2380"],"listen-peer-urls":["https://192.168.61.50:2380"],"advertise-client-urls":["https://192.168.61.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:25:25.978005Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:25:25.978370Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.50:2380"}
	{"level":"info","ts":"2024-08-19T13:25:25.980747Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.50:2380"}
	{"level":"info","ts":"2024-08-19T13:25:26.703719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T13:25:26.703856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T13:25:26.703896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 received MsgPreVoteResp from eeb5a3f186171c82 at term 1"}
	{"level":"info","ts":"2024-08-19T13:25:26.703926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.703951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 received MsgVoteResp from eeb5a3f186171c82 at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.703977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.704002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eeb5a3f186171c82 elected leader eeb5a3f186171c82 at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.707821Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.709968Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"eeb5a3f186171c82","local-member-attributes":"{Name:embed-certs-823470 ClientURLs:[https://192.168.61.50:2379]}","request-path":"/0/members/eeb5a3f186171c82/attributes","cluster-id":"4c0bf4743f00d511","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:25:26.710132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:25:26.710437Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4c0bf4743f00d511","local-member-id":"eeb5a3f186171c82","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.710527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.712712Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.712768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:25:26.712793Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:25:26.712836Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:25:26.713800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:25:26.718336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:25:26.723260Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:25:26.724028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.50:2379"}
	
	
	==> kernel <==
	 13:34:50 up 14 min,  0 users,  load average: 0.40, 0.17, 0.09
	Linux embed-certs-823470 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3] <==
	W0819 13:25:19.946455       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:19.957968       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:19.983522       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:19.987881       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.012504       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.038074       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.110390       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.125244       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.145296       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.254868       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.400133       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.491577       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.753855       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.785066       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.908891       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.924405       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.952051       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.987101       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.044984       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.103064       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.107570       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.171573       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.172961       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.199947       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.225501       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 13:30:29.286359       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:30:29.286400       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 13:30:29.287436       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:30:29.287505       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:31:29.287819       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:31:29.287889       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:31:29.287925       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:31:29.287956       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:31:29.289041       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:31:29.289031       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:33:29.289815       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:33:29.289956       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 13:33:29.289847       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:33:29.290007       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 13:33:29.291318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:33:29.291357       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129] <==
	E0819 13:29:35.283401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:29:35.724436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:30:05.292130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:30:05.733416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:30:35.299123       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:30:35.742977       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:30:47.668853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-823470"
	E0819 13:31:05.304491       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:31:05.752102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:31:35.311457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:31:35.759451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:31:37.876346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="297.739µs"
	I0819 13:31:51.871156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="58.927µs"
	E0819 13:32:05.318268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:32:05.767549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:32:35.324950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:32:35.776146       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:33:05.333019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:33:05.784908       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:33:35.340612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:33:35.794297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:34:05.350940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:34:05.803878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:34:35.358731       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:34:35.812069       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:25:36.625124       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:25:36.636535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.50"]
	E0819 13:25:36.636616       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:25:36.706916       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:25:36.706978       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:25:36.707011       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:25:36.712316       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:25:36.712555       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:25:36.712584       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:25:36.717426       1 config.go:197] "Starting service config controller"
	I0819 13:25:36.717746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:25:36.717848       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:25:36.717870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:25:36.718360       1 config.go:326] "Starting node config controller"
	I0819 13:25:36.719743       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:25:36.818786       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:25:36.818775       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:25:36.820233       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5] <==
	W0819 13:25:29.175987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:25:29.176056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.176154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 13:25:29.176196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.238457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 13:25:29.238511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.240233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:25:29.240809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.256893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:25:29.256939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.295443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 13:25:29.295761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.307879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 13:25:29.307938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.333980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:25:29.334153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.434387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:25:29.434439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.532195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:25:29.532250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.554337       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 13:25:29.554392       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 13:25:29.598174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:25:29.598225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 13:25:31.427272       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:33:41 embed-certs-823470 kubelet[2902]: E0819 13:33:41.039326    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074421039034345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:41 embed-certs-823470 kubelet[2902]: E0819 13:33:41.039357    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074421039034345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:46 embed-certs-823470 kubelet[2902]: E0819 13:33:46.858756    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:33:51 embed-certs-823470 kubelet[2902]: E0819 13:33:51.041062    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074431040484561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:51 embed-certs-823470 kubelet[2902]: E0819 13:33:51.041604    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074431040484561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:33:58 embed-certs-823470 kubelet[2902]: E0819 13:33:58.859223    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:34:01 embed-certs-823470 kubelet[2902]: E0819 13:34:01.043854    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074441043042648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:01 embed-certs-823470 kubelet[2902]: E0819 13:34:01.044381    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074441043042648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:09 embed-certs-823470 kubelet[2902]: E0819 13:34:09.857332    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:34:11 embed-certs-823470 kubelet[2902]: E0819 13:34:11.046200    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074451045779716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:11 embed-certs-823470 kubelet[2902]: E0819 13:34:11.046222    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074451045779716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:21 embed-certs-823470 kubelet[2902]: E0819 13:34:21.050254    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074461049942207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:21 embed-certs-823470 kubelet[2902]: E0819 13:34:21.050621    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074461049942207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:23 embed-certs-823470 kubelet[2902]: E0819 13:34:23.857570    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:34:30 embed-certs-823470 kubelet[2902]: E0819 13:34:30.880558    2902 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 13:34:30 embed-certs-823470 kubelet[2902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 13:34:30 embed-certs-823470 kubelet[2902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 13:34:30 embed-certs-823470 kubelet[2902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 13:34:30 embed-certs-823470 kubelet[2902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:34:31 embed-certs-823470 kubelet[2902]: E0819 13:34:31.052168    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074471051756451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:31 embed-certs-823470 kubelet[2902]: E0819 13:34:31.052195    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074471051756451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:36 embed-certs-823470 kubelet[2902]: E0819 13:34:36.857650    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:34:41 embed-certs-823470 kubelet[2902]: E0819 13:34:41.053563    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074481053114550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:41 embed-certs-823470 kubelet[2902]: E0819 13:34:41.053653    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074481053114550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:47 embed-certs-823470 kubelet[2902]: E0819 13:34:47.857842    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	
	
	==> storage-provisioner [1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d] <==
	I0819 13:25:38.070064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:25:38.124239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:25:38.125420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:25:38.140262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:25:38.140439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-823470_96ffcd4e-b568-448b-b18e-68aac759b662!
	I0819 13:25:38.141472       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0828c11d-0ca3-4e25-9d27-447d187659cf", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-823470_96ffcd4e-b568-448b-b18e-68aac759b662 became leader
	I0819 13:25:38.240850       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-823470_96ffcd4e-b568-448b-b18e-68aac759b662!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-823470 -n embed-certs-823470
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-823470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hkt29
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-823470 describe pod metrics-server-6867b74b74-hkt29
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-823470 describe pod metrics-server-6867b74b74-hkt29: exit status 1 (65.084876ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hkt29" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-823470 describe pod metrics-server-6867b74b74-hkt29: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 13:26:26.668433 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:27:44.026517 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:27:51.877762 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441627 -n no-preload-441627
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 13:35:25.579487478 +0000 UTC m=+5926.214277758
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-441627 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-441627 logs -n 25: (2.046878839s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
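The 403 and 500 responses above are expected transient states: anonymous requests are forbidden until RBAC bootstrap roles exist, and /healthz keeps failing while the rbac and scheduling post-start hooks run. minikube simply keeps polling until the endpoint returns 200. A minimal polling sketch against the endpoint from the log, skipping verification of the cluster-local apiserver certificate for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cluster certificate, so skip verification here.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.69:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending) both mean "not ready yet".
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}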
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
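The pod_ready lines show each system-critical pod being skipped because the node itself has not yet reported Ready; the wait records the condition error and moves on to the next pod. A client-go sketch of checking one pod's Ready condition, using the kubeconfig path and coredns pod name from the log; this is an illustration, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19479-2086935/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll until the pod reports Ready=True or the timeout expires.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-8tl4c", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}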
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
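Addon enablement above amounts to copying the manifests into /etc/kubernetes/addons on the node and applying them with the node's own kubeconfig and pinned kubectl binary, exactly as the Run lines show. A sketch of that apply step with os/exec, meant to run on the node itself; paths and the version string come from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f ... -f ...
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", "/var/lib/minikube/binaries/v1.31.0/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}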
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
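WaitForSSH above just retries a no-op command (exit 0) through an external ssh invocation until the guest's sshd accepts the connection. A minimal sketch of the same probe, reusing the key path, user, and options visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		// A successful "exit 0" means sshd is up and the key is accepted.
		err := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", "/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa",
			"docker@192.168.39.238", "exit 0").Run()
		if err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}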
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
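The preload decision above hinges on whether the expected control-plane images are already present in CRI-O's image store; since crictl reported no kube-apiserver:v1.20.0, the ~473 MB preload tarball is copied into the guest and unpacked under /var. A rough way to reproduce that presence check by hand (grep-based matching is a simplification; the actual code inspects the JSON output):

	# succeeds only if the expected apiserver image already exists in the runtime's store
	sudo crictl images --output json | grep -q '"registry.k8s.io/kube-apiserver:v1.20.0"'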
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
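The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml after the diff check later in this restart. A sketch for inspecting the staged file and comparing it to the active copy (profile name taken from this run):

    minikube -p old-k8s-version-620467 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
    minikube -p old-k8s-version-620467 ssh "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"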
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
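The bash one-liner above keeps /etc/hosts idempotent: it strips any existing control-plane.minikube.internal entry, appends a fresh one for the node IP, and copies the result back in one shot. The same logic expanded into separate steps (hypothetical temp-file name):

    ip=192.168.39.238
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '%s\tcontrol-plane.minikube.internal\n' "$ip" >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts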
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
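The three blocks above follow OpenSSL's subject-hash convention: each CA placed in /usr/share/ca-certificates gets a <hash>.0 symlink under /etc/ssl/certs so the verify path can locate it by hash. The equivalent manual steps for the cluster CA (hash value as it appears in this log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0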
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
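Each -checkend 86400 call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero means it expires inside that window. A standalone sketch against one of the files checked above:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h"
    fi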
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
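Because existing configuration files were found, the control plane is rebuilt by replaying individual kubeadm init phases against the staged config instead of running a full kubeadm init. Condensed, the sequence executed above is:

    conf=/var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase certs all          --config "$conf"
    sudo kubeadm init phase kubeconfig all     --config "$conf"
    sudo kubeadm init phase kubelet-start      --config "$conf"
    sudo kubeadm init phase control-plane all  --config "$conf"
    sudo kubeadm init phase etcd local         --config "$conf"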
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
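The repeated pgrep calls above are the wait-for-apiserver loop: roughly twice a second minikube looks for a kube-apiserver process belonging to this cluster and keeps polling until one appears or the caller's timeout fires. The same loop as a shell sketch (timeout handling omitted):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done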
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
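The provisioning pass ends by writing CRI-O's minikube drop-in (the SSH command a few lines above) and restarting the service so the insecure-registry range for in-cluster services takes effect. Reproduced as plain commands on the node, a sketch of the same effect:

    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio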
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
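	[editor note] The run above reprovisions the embed-certs node's runtime: cri-dockerd and docker are stopped and masked, /etc/crio/crio.conf.d/02-crio.conf is rewritten via sed (pause image, cgroupfs driver, default_sysctls), br_netfilter is loaded, CRI-O is restarted, and minikube then waits up to 60s for /var/run/crio/crio.sock before asking crictl for the runtime version. As a rough, hedged sketch of that final wait step (illustrative only; not minikube's ssh_runner-based code), a polling loop with a deadline could look like this in Go:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the deadline passes.
	// Illustrative sketch; minikube runs this check remotely over SSH.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
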
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
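	[editor note] The restartPrimaryControlPlane path above removes the stale kubeconfigs it could not verify, copies /var/tmp/minikube/kubeadm.yaml.new into place, and drives individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init. A hedged sketch of driving that same phase sequence from Go with os/exec (the helper name is invented for illustration; the binary path, config path, and phase list are taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runKubeadmPhases runs the "init phase" sequence shown in the log against a
	// rendered kubeadm config. Illustrative helper, not minikube's API.
	func runKubeadmPhases(kubeadmBin, configPath string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", configPath)
			cmd := exec.Command(kubeadmBin, args...)
			cmd.Stdout = os.Stdout
			cmd.Stderr = os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("kubeadm %v failed: %w", phase, err)
			}
		}
		return nil
	}

	func main() {
		err := runKubeadmPhases("/var/lib/minikube/binaries/v1.31.0/kubeadm",
			"/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
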
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
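	[editor note] From here the log polls https://192.168.61.50:8443/healthz roughly every 500ms: first connection refused, then 403 (anonymous user blocked while RBAC bootstraps), then 500 with individual poststarthook checks failing, until the apiserver reports healthy. A minimal Go sketch of that style of poll, assuming for illustration a client that skips certificate verification (minikube's real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz keeps hitting an apiserver /healthz endpoint until it returns
	// 200 or the deadline passes. Illustrative only; prefer verifying the CA.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.61.50:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
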
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
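The block above is minikube's apiserver health wait: api_server.go polls https://192.168.61.50:8443/healthz roughly every 500ms, logging the failing apiservice-discovery-controller poststarthook each time, until the endpoint finally returns 200 (about 6.5s in this run). Below is a minimal illustrative sketch of such a poll loop, not minikube's actual implementation; the URL and interval are taken from the log, and the TLS verification is deliberately simplified (minikube validates against the cluster CA rather than skipping verification).

// Minimal sketch: poll an apiserver /healthz endpoint every ~500ms until it
// returns HTTP 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch only; minikube uses the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case in the log
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.50:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}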
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
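The pod_ready.go entries above repeatedly check whether each system-critical pod has the Ready condition set to True, skipping pods hosted on a node whose own Ready status is still False. A minimal client-go sketch of that Ready-condition check follows; it is illustrative only, the kubeconfig path is an assumption, and the pod name is copied from the log.

// Minimal sketch: fetch one pod and report whether its PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-jcnwd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}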
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
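configureAuth above regenerates the machine's server certificate with SANs for 127.0.0.1, 192.168.50.161, localhost, minikube and no-preload-441627, then copies the CA and server cert/key into /etc/docker. A minimal crypto/x509 sketch of issuing a certificate with those SANs is below; it is illustrative only and self-signed for brevity, whereas minikube signs the server certificate with its own CA key.

// Minimal sketch: emit a self-signed certificate carrying the SANs from the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-441627"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs corresponding to the "san=[...]" list in the log above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-441627"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.161")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}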
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
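The fix.go lines above read the guest clock over SSH with `date +%s.%N`, parse the seconds.nanoseconds value, and confirm the delta against the host clock is within tolerance (98.5ms in this run). A minimal parsing sketch follows; it is illustrative only, the sample value is taken from the log, and the tolerance constant is an assumption rather than minikube's setting.

// Minimal sketch: parse `date +%s.%N` output and compare the guest clock to the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1724073655.727816487") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}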
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
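	Note: the pair of commands above is an idempotent /etc/hosts update: check for the host.minikube.internal entry, then rewrite the file with any stale line filtered out and the fresh mapping appended. A small Go sketch of that pattern follows, assuming direct write access instead of the temp-file-plus-sudo-cp used in the log; it writes a sibling file so the sketch stays harmless.

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.50.1\thost.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // drop the stale entry before re-adding it
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)

		// the real run writes to /tmp/h.$$ and then copies it over /etc/hosts with sudo;
		// writing /etc/hosts.new here keeps the illustration side-effect free.
		if err := os.WriteFile("/etc/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}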
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
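	Note: the image section above is the no-preload path: the preload tarball is absent, so each required image is checked on the guest with podman image inspect, mismatched copies are removed with crictl rmi, and the per-image tarballs cached under /var/lib/minikube/images are loaded one at a time with podman load. Below is a rough, illustrative sketch of that load loop, not minikube's actual cache_images.go.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		dir := "/var/lib/minikube/images"
		tarballs := []string{
			"coredns_v1.11.1", "kube-proxy_v1.31.0", "kube-controller-manager_v1.31.0",
			"etcd_3.5.15-0", "kube-apiserver_v1.31.0", "storage-provisioner_v5",
			"kube-scheduler_v1.31.0",
		}
		for _, t := range tarballs {
			p := filepath.Join(dir, t)
			if _, err := os.Stat(p); err != nil {
				// the log's `stat -c "%s %y"` plays the same role: if the tarball is
				// already on the guest, the copy step is skipped ("copy: skipping ... (exists)")
				fmt.Printf("missing %s, would copy it from the host cache first\n", t)
				continue
			}
			// images load sequentially, which is why the log interleaves
			// "Loading image:" lines with multi-second completion times
			if out, err := exec.Command("sudo", "podman", "load", "-i", p).CombinedOutput(); err != nil {
				fmt.Printf("load %s failed: %v\n%s", t, err, out)
				return
			}
			fmt.Println("loaded", t)
		}
	}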
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
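	Note: the openssl/ln steps above install each extra CA into the guest's trust store: the certificate's subject hash is computed with `openssl x509 -hash -noout` and a <hash>.0 symlink is created under /etc/ssl/certs so OpenSSL-based clients can find it. A hedged Go sketch of that pattern:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// `openssl x509 -hash -noout` prints the subject hash (b5213941 in the log above)
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if _, err := os.Lstat(link); err == nil {
			fmt.Println("already linked:", link)
			return
		}
		if err := os.Symlink(cert, link); err != nil {
			panic(err) // needs root, as in the log's `sudo ... ln -fs`
		}
		fmt.Println("linked", link, "->", cert)
	}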
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
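	Note: the `openssl x509 -checkend 86400` calls above verify that the existing control-plane certificates remain valid for at least another 24 hours before they are reused. A minimal Go equivalent, assuming root access to /var/lib/minikube/certs:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// exit status 0 means the certificate is still valid 86400s (24h) from now
			if err := exec.Command("sudo", "openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
				fmt.Printf("%s expires within 24h or could not be read: %v\n", c, err)
				continue
			}
			fmt.Printf("%s valid for at least 24h\n", c)
		}
	}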
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
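	Note: the grep/rm sequence above is the stale kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise removed so the kubeadm init phases below can regenerate it (in this run the greps fail simply because the files do not exist yet). An illustrative Go sketch of the same check:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// missing or pointing at the wrong endpoint: remove it so
				// `kubeadm init phase kubeconfig all` regenerates it below
				_ = os.Remove(f)
				fmt.Println("removed or absent:", f)
				continue
			}
			fmt.Println("kept:", f)
		}
	}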
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
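	Note: the 403 and 500 responses above are expected while the freshly restarted apiserver finishes its post-start hooks; the wait loop keeps polling /healthz until it returns 200. A minimal sketch of such a polling loop follows, assuming the same endpoint and skipping TLS verification purely as an illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// the apiserver cert is not in this client's trust store, so skip
			// verification for the health probe in this sketch
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get("https://192.168.50.161:8443/healthz")
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403 (anonymous user) and 500 (post-start hooks still running)
				// both mean "not ready yet", exactly as in the log above
				fmt.Printf("healthz returned %d, retrying\n", code)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}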
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
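For reference, the apiserver wait logged above (13:21:16 through 13:21:20) is a plain HTTPS poll of the /healthz endpoint roughly every 500ms, printing the per-check report whenever the server answers 500 and stopping once the body is "ok" with HTTP 200. The Go sketch below shows the same idea in minimal form; it is an illustration only, not minikube's api_server.go, the pollHealthz name is hypothetical, and it skips TLS verification for brevity where the real client authenticates with cluster certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz polls the apiserver /healthz endpoint until it returns 200 or the timeout expires.
    // Hypothetical helper: the real check also records duration metrics, as seen in the log above.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption: certificate verification is skipped here for brevity.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // the log above polls at roughly this interval
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.50.161:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }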
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
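The two commands above create /etc/cni/net.d and copy a generated bridge conflist (496 bytes) onto the node. The sketch below writes a representative bridge CNI config of the same shape; the exact contents of minikube's generated file are not shown in this log, so the JSON fields and the podCIDR value here are assumptions for illustration only.

    package main

    import (
        "log"
        "os"
    )

    // A representative bridge CNI conflist. Assumption: field values are illustrative,
    // not the exact 496-byte file minikube generated for this node.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }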
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
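The pod_ready lines above (and the many that follow) all reduce to the same check: fetch the pod, look for a PodReady condition with status True, and skip the wait while the hosting node itself reports Ready False. A minimal client-go sketch of that check follows; it is not minikube's pod_ready.go, and the kubeconfig path is a placeholder, with the namespace and pod name taken from the log only as an example.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a standard kubeconfig; minikube builds its client differently.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-v76kp", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }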
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
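The repeating blocks for the process using the v1.20.0 kubectl binary follow one pattern: pgrep finds no running kube-apiserver, each sudo crictl ps -a --quiet --name=<component> query returns an empty ID list ("found id: \"\"", "0 containers"), and the tool then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. The sketch below mirrors that container probe; as an assumption it execs crictl locally rather than over SSH as ssh_runner does, and the helper name is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the container IDs crictl reports for a given name filter,
    // mirroring the `sudo crictl ps -a --quiet --name=<name>` calls in the log above.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
        for _, component := range components {
            ids, err := listContainerIDs(component)
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", component, err)
                continue
            }
            if len(ids) == 0 {
                // Matches the "No container was found matching ..." lines above: the control plane never came up.
                fmt.Printf("no container found matching %q\n", component)
                continue
            }
            fmt.Printf("%q containers: %v\n", component, ids)
        }
    }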
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
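The stanza above is one pass of minikube's log-collection loop on a node whose control plane never came up: each expected component is probed with crictl, every probe returns an empty ID list, and the collector falls back to kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of that sequence, assembled only from the ssh_runner commands recorded above (run as root on the guest; nothing beyond those commands is implied):

	# Probe each expected component; empty output matches the `found id: ""` entries in the log.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done

	# Fallback log sources gathered once no containers are found.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig    # fails while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a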
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
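Every describe-nodes attempt in this run fails the same way: the kubeconfig targets localhost:8443, but the crictl probes show no kube-apiserver container, so nothing is listening on that port and kubectl gets connection refused. A quick check on the guest would confirm it; this is a hedged sketch, not taken from the log, and assumes ss and curl are available in the image:

	# Show that nothing serves the apiserver port the kubeconfig points at.
	sudo ss -lntp | grep ':8443' || echo 'no listener on 8443'
	# Reproduce the refusal kubectl reports.
	curl -ksS https://localhost:8443/healthz || echo 'connection refused'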
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
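	The cycle above shows the tooling probing the node over SSH for each expected control-plane container by name ("sudo crictl ps -a --quiet --name=<name>") and finding none. A minimal sketch of that per-component probe is below; it assumes crictl is available and runnable via sudo on the host, and the helper names and output handling are illustrative rather than minikube's actual code.

	// probe.go: sketch of the container probe seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range names {
			// --quiet prints only container IDs, one per line; -a includes
			// stopped containers. Empty output means nothing matched the name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}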
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
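	The interleaved pod_ready.go lines come from parallel test clusters polling whether the metrics-server pod's "Ready" condition has become True. A minimal sketch of such a readiness check is shown here, shelling out to kubectl with a jsonpath filter; the kube context name is a placeholder and the polling cadence is illustrative, not the test's exact implementation.

	// ready_probe.go: sketch of a pod "Ready" condition poll.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(kubeContext, namespace, pod string) (bool, error) {
		// Reads only the status of the Ready condition ("True"/"False").
		out, err := exec.Command(
			"kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// "my-profile" is a placeholder context; the pod name is taken from the log.
		for {
			ok, err := podReady("my-profile", "kube-system", "metrics-server-6867b74b74-hd777")
			fmt.Println("ready:", ok, "err:", err)
			if ok {
				return
			}
			time.Sleep(2 * time.Second)
		}
	}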
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
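	Taken together, the repeating cycles above amount to a wait loop: check for a running kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), and when it is absent, gather kubelet/dmesg/describe-nodes/CRI-O/container-status logs and retry a few seconds later; the recurring "connection to the server localhost:8443 was refused" confirms the API server never came up. A minimal sketch of such a poll-until-healthy loop follows; the pgrep pattern is copied from the log, while the timeout and retry interval are assumptions for illustration.

	// wait_apiserver.go: sketch of a deadline-bounded apiserver wait loop.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func apiserverRunning() bool {
		// pgrep exits 0 only when a matching process exists.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			// In the report, each failed probe is followed by log gathering
			// before the next attempt roughly three seconds later.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}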
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
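
The block above is the diagnostic sweep minikube runs over SSH when it cannot find any control-plane containers: the kubelet and CRI-O journals, recent dmesg warnings, "kubectl describe nodes", and a container-status listing. Below is a minimal local sketch of the same sweep using os/exec; the command strings are copied from the log, everything else (function names, output layout) is illustrative and not minikube's actual ssh_runner.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command through bash -c and prints its
    // combined output, mirroring the "Gathering logs for ..." steps above.
    func gather(label, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("CRI-O", "sudo journalctl -u crio -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
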
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
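
The repeated pod_ready lines are a poll loop: the pod's Ready condition is re-checked on an interval until a 4-minute deadline, and because metrics-server never reports Ready the wait ends with "context deadline exceeded" while the other system pods pass. A rough sketch of such a poll loop follows; it shells out to kubectl instead of using minikube's client-go path, and waitPodReady is a made-up helper name.

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the Ready condition of a pod until ctx expires.
    // Illustrative only; minikube does the equivalent through the API client.
    func waitPodReady(ctx context.Context, ns, pod string) error {
    	for {
    		out, _ := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
    			"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
    		if strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
    		case <-time.After(2 * time.Second):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	fmt.Println(waitPodReady(ctx, "kube-system", "metrics-server-6867b74b74-hmghm"))
    }
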
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
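
Each "listing CRI containers" step above shells out to crictl ps -a --quiet --name=<component> to collect container IDs for that component, then pulls the last 400 log lines of every ID it found with crictl logs --tail 400. A compact sketch of that discovery-plus-dump loop, with the component names and commands taken from the log (the wrapper itself is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
    		// IDs come back one per line from --quiet; an empty result is what
    		// produces "No container was found matching <name>" above.
    		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		for _, id := range strings.Fields(string(out)) {
    			logs, _ := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s %s ---\n%s\n", name, id, logs)
    		}
    	}
    }
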
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
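
The "describe nodes" failure here is a consequence rather than a cause: the kubeconfig points kubectl at the local apiserver on localhost:8443, and since no kube-apiserver container is running (see the empty listings above) the TCP connection is refused. A quick connectivity probe that reproduces the same distinction, purely for illustration:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// If nothing is listening on 8443, Dial fails with "connection refused",
    	// matching the kubectl error in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
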
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
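
The sequence above is the stale-config check before kubeadm init is re-run: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files do not exist at all, so every grep exits with status 2 and every rm is a no-op). A sketch of that loop, with the endpoint and paths taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			// Endpoint missing (or file missing): drop the file so kubeadm regenerates it.
    			fmt.Println("removing stale", path)
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
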
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
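
At this point kubeadm has written the four static Pod manifests into /etc/kubernetes/manifests and waits up to 4m0s for the kubelet to start them. A small check that mirrors what "Using manifest folder" implies, listing the expected manifests (paths and file names from the log; the check itself is only a sketch):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/manifests"
    	for _, name := range []string{"kube-apiserver.yaml", "kube-controller-manager.yaml",
    		"kube-scheduler.yaml", "etcd.yaml"} {
    		if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
    			fmt.Println("missing static Pod manifest:", name)
    			continue
    		}
    		fmt.Println("found static Pod manifest:", name)
    	}
    }
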
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
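
Here the restart path succeeds: the healthz probe against https://192.168.72.69:8444/healthz returns 200 after roughly four seconds, so the run moves on to the kube-system pod checks. A bare-bones version of such a probe is below; InsecureSkipVerify is used only to keep the sketch self-contained, whereas the real check trusts the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: skip certificate verification for a self-contained example.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.72.69:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
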
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
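
Before the "Done!" line, the run verifies the kube-system pods, the default service account, the kubelet service and the node capacity (17734596Ki ephemeral storage and 2 CPUs on this node). The capacity figures come from the node status; an equivalent query via kubectl, wrapped only so the example compiles (the jsonpath expression is standard kubectl, the wrapper is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("kubectl", "get", "nodes", "-o",
    		"jsonpath={range .items[*]}{.metadata.name}: cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{\"\\n\"}{end}").CombinedOutput()
    	fmt.Printf("%s err=%v\n", out, err)
    }
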
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
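kubeadm has now reported a successful control-plane init for no-preload-441627. A quick manual check from inside the VM, assuming the freshly written admin.conf is used as the kubeconfig (a sketch, not part of the test flow):

	# Inspect the new control plane with the kubeconfig kubeadm just wrote.
	sudo KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.31.0/kubectl get nodes -o wide
	sudo KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.31.0/kubectl get pods -n kube-system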
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
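The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen for the kvm2 driver + crio runtime combination. Its exact contents are not shown in the log; the commented JSON below is only a hypothetical illustration of what a typical bridge conflist looks like, not minikube's actual file:

	# Show the bridge CNI config that was just written; the commented JSON is a
	# hypothetical example of a typical bridge conflist, not the real contents.
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# {
	#   "cniVersion": "0.3.1",
	#   "name": "bridge",
	#   "plugins": [
	#     {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	#      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	#     {"type": "portmap", "capabilities": {"portMappings": true}}
	#   ]
	# }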
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
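The two kubectl invocations above grant cluster-admin to the kube-system:default service account (binding name minikube-rbac) and label the node as the primary minikube node; the oom_adj read (-16) checks the apiserver's OOM-killer score. A hedged verification sketch using the same kubeconfig the test uses:

	# Confirm the RBAC binding and node labels created above.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl get clusterrolebinding minikube-rbac -o wide
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl get node no-preload-441627 --show-labels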
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
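The metrics-server addon is applied from four manifests (APIService, Deployment, RBAC, Service). A hedged way to confirm it actually becomes usable after the apply, assuming the standard metrics-server object names (deployment metrics-server, APIService v1beta1.metrics.k8s.io):

	# Check the metrics-server addon after the manifests are applied (standard names assumed).
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl get apiservice v1beta1.metrics.k8s.io
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl top nodes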
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
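Three addons end up enabled for this profile. From the host, the same toggles can be inspected or changed per profile with the minikube CLI; a sketch, with the profile name taken from the log:

	# Inspect or toggle addons for this profile from the host (sketch).
	minikube -p no-preload-441627 addons list
	minikube -p no-preload-441627 addons enable metrics-server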
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
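The pod_ready helper has now waited for coredns, etcd, the apiserver, the controller-manager, kube-proxy and the scheduler to report Ready. Roughly the same check can be expressed with kubectl wait over the selectors listed in the log (a sketch, not the test's implementation):

	# Approximate the system-critical pod wait with kubectl (sketch only).
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl \
	        -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	done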
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
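The healthz probe above goes straight at the apiserver over HTTPS. Done by hand it is a single curl against the same endpoint; the CA path below is the usual location under the certificateDir from the kubeadm output and is an assumption:

	# Manual equivalent of the apiserver healthz check (CA path is an assumption).
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.161:8443/healthz
	# or, skipping certificate verification:
	curl -k https://192.168.50.161:8443/healthz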
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
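The v1.20.0 init attempt fails because the kubelet never answers on 127.0.0.1:10248, and the preflight warning notes the kubelet service is not enabled, so minikube resets the node with kubeadm reset before retrying. Following the log's own troubleshooting hints by hand would look roughly like this (a sketch, not what the test runs):

	# Follow the troubleshooting hints from the kubeadm output above.
	sudo systemctl enable --now kubelet
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause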
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
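	The suggestion above maps to a concrete retry. A minimal sketch, assuming the KVM/crio configuration implied by this job: the --extra-config flag is quoted verbatim from the log's suggestion, while the driver, runtime, and Kubernetes version flags are inferred from the job name and failing version and may need adjusting; the follow-up checks are the same kubelet inspection commands the kubeadm output recommends, run over minikube ssh.
	
		# retry the start with the kubelet cgroup driver forced to systemd, as the log suggests
		minikube start --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
		# if the kubelet still fails its health check on port 10248, inspect it on the node
		minikube ssh "sudo systemctl status kubelet"
		minikube ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"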
	
	
	==> CRI-O <==
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.053133556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074527053106776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6840a23-ae3a-4f12-aed0-f761cbfa45c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.053901943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3bad225-737c-4ebe-953e-a75548fb0883 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.054022266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3bad225-737c-4ebe-953e-a75548fb0883 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.055053402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3bad225-737c-4ebe-953e-a75548fb0883 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.096264432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83fa8434-ef34-457c-a5fb-30cf1aa22d24 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.096416550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83fa8434-ef34-457c-a5fb-30cf1aa22d24 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.098310405Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=061e36c1-d9f4-4316-951c-3b3e85ff5b3f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.098669621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074527098649033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=061e36c1-d9f4-4316-951c-3b3e85ff5b3f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.099131424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec2eaca4-5a9c-4da7-bca5-f94b7259317e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.099182345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec2eaca4-5a9c-4da7-bca5-f94b7259317e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.099413656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec2eaca4-5a9c-4da7-bca5-f94b7259317e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.141472270Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=282f893d-10d6-4cd1-a4b3-55505f9028bd name=/runtime.v1.RuntimeService/Version
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.141547658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=282f893d-10d6-4cd1-a4b3-55505f9028bd name=/runtime.v1.RuntimeService/Version
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.142793153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d33d08d3-817e-489a-a895-3a7082c7516f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.143632937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074527143606822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d33d08d3-817e-489a-a895-3a7082c7516f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.144836504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e7b6604-9efe-44fa-a5a1-5a36c07cb4bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.144904114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e7b6604-9efe-44fa-a5a1-5a36c07cb4bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.145118306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e7b6604-9efe-44fa-a5a1-5a36c07cb4bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.179676461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd4957ee-6882-442b-ae95-5e38663f441f name=/runtime.v1.RuntimeService/Version
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.179983606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd4957ee-6882-442b-ae95-5e38663f441f name=/runtime.v1.RuntimeService/Version
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.181798103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1efe8d4c-60dd-49ea-bd12-f9f6cf8e90c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.182159516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074527182138837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1efe8d4c-60dd-49ea-bd12-f9f6cf8e90c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.182724913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21692b50-cd30-49d1-aee8-2209d0ff6be4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.182798234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21692b50-cd30-49d1-aee8-2209d0ff6be4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:35:27 no-preload-441627 crio[722]: time="2024-08-19 13:35:27.183154943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21692b50-cd30-49d1-aee8-2209d0ff6be4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e17eb18e8b9ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ff68b19fa6ad6       coredns-6f6b679f8f-mvn98
	9192ed2c60578       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ae2af585e8ee5       coredns-6f6b679f8f-p6zhv
	5d764ddaa462f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3cfeee60ad415       storage-provisioner
	32b6c7c210803       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   023cc108d67fa       kube-proxy-9jhss
	a3fc542904b35       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   a4ba0bbd95bd8       kube-controller-manager-no-preload-441627
	b2b4e2ac023ac       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   e8e4fb056925a       kube-scheduler-no-preload-441627
	68a14670fea88       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   4ee4ace14f1d9       etcd-no-preload-441627
	9d17b2b272840       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   47cbb5662d617       kube-apiserver-no-preload-441627
	5a5df38b0f875       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   5484cc94ffcba       kube-apiserver-no-preload-441627
	
	
	==> coredns [9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-441627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-441627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=no-preload-441627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:26:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-441627
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:35:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:31:26 +0000   Mon, 19 Aug 2024 13:26:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:31:26 +0000   Mon, 19 Aug 2024 13:26:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:31:26 +0000   Mon, 19 Aug 2024 13:26:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:31:26 +0000   Mon, 19 Aug 2024 13:26:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.161
	  Hostname:    no-preload-441627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb2ad5a0a588440892626724dbf0c23e
	  System UUID:                cb2ad5a0-a588-4408-9262-6724dbf0c23e
	  Boot ID:                    ac99fccb-06be-4148-8d22-2e0b842e0c48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-mvn98                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-6f6b679f8f-p6zhv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-441627                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-441627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-441627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-9jhss                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-441627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-rzp96              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node no-preload-441627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node no-preload-441627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node no-preload-441627 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node no-preload-441627 event: Registered Node no-preload-441627 in Controller
	
	
	==> dmesg <==
	[  +0.043989] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.045839] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.590373] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.371695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.381353] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.072261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064530] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.201399] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.124738] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.281680] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Aug19 13:21] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.063486] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.149136] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +3.276619] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.121924] kauditd_printk_skb: 55 callbacks suppressed
	[ +28.136085] kauditd_printk_skb: 30 callbacks suppressed
	[Aug19 13:26] systemd-fstab-generator[3036]: Ignoring "noauto" option for root device
	[  +0.069449] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.486632] systemd-fstab-generator[3358]: Ignoring "noauto" option for root device
	[  +0.081829] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.303165] systemd-fstab-generator[3471]: Ignoring "noauto" option for root device
	[  +0.123314] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.693247] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4] <==
	{"level":"info","ts":"2024-08-19T13:26:05.112283Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.161:2380"}
	{"level":"info","ts":"2024-08-19T13:26:05.116470Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.161:2380"}
	{"level":"info","ts":"2024-08-19T13:26:05.112526Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3fbdf04b5b0eb504","initial-advertise-peer-urls":["https://192.168.50.161:2380"],"listen-peer-urls":["https://192.168.50.161:2380"],"advertise-client-urls":["https://192.168.50.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:26:05.112543Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:26:05.112234Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9aa7cd058091608f","local-member-id":"3fbdf04b5b0eb504","added-peer-id":"3fbdf04b5b0eb504","added-peer-peer-urls":["https://192.168.50.161:2380"]}
	{"level":"info","ts":"2024-08-19T13:26:05.952582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T13:26:05.952641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T13:26:05.952669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgPreVoteResp from 3fbdf04b5b0eb504 at term 1"}
	{"level":"info","ts":"2024-08-19T13:26:05.952685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.952691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgVoteResp from 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.952700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.952709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3fbdf04b5b0eb504 elected leader 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.954058Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3fbdf04b5b0eb504","local-member-attributes":"{Name:no-preload-441627 ClientURLs:[https://192.168.50.161:2379]}","request-path":"/0/members/3fbdf04b5b0eb504/attributes","cluster-id":"9aa7cd058091608f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:26:05.954113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:26:05.954437Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.954502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:26:05.955183Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:26:05.955654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:26:05.955672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:26:05.955705Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9aa7cd058091608f","local-member-id":"3fbdf04b5b0eb504","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.955753Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.955775Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.956118Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.161:2379"}
	{"level":"info","ts":"2024-08-19T13:26:05.956414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:26:05.957213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:35:27 up 14 min,  0 users,  load average: 0.15, 0.17, 0.10
	Linux no-preload-441627 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7] <==
	W0819 13:25:56.238675       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.267659       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.313757       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.315162       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.506769       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.534173       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.573409       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.631019       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.197975       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.326118       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.467529       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.524631       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.664786       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.990848       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.062188       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.121321       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.298322       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.317238       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.414658       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.419108       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.451023       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.475444       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.495936       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.529672       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.576700       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c] <==
	W0819 13:31:08.322445       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:31:08.322538       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:31:08.323717       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:31:08.323822       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:32:08.325077       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:32:08.325182       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:32:08.325295       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:32:08.325435       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:32:08.326414       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:32:08.326562       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:34:08.326886       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:34:08.326983       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:34:08.327074       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:34:08.327126       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:34:08.328109       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:34:08.328166       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36] <==
	E0819 13:30:14.317877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:30:14.772168       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:30:44.325182       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:30:44.779782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:31:14.332531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:31:14.794385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:31:26.503045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-441627"
	E0819 13:31:44.339429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:31:44.804888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:32:14.347140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:32:14.814056       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:32:16.068939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="230.349µs"
	I0819 13:32:27.065278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="93.544µs"
	E0819 13:32:44.354046       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:32:44.821639       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:33:14.362470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:33:14.836766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:33:44.369105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:33:44.844786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:34:14.375950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:34:14.853430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:34:44.381797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:34:44.864130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:35:14.389255       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:35:14.879535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:26:15.998926       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:26:16.024086       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.161"]
	E0819 13:26:16.024386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:26:16.216927       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:26:16.217011       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:26:16.217100       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:26:16.223019       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:26:16.223291       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:26:16.223600       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:26:16.227207       1 config.go:197] "Starting service config controller"
	I0819 13:26:16.227278       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:26:16.227320       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:26:16.227430       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:26:16.229093       1 config.go:326] "Starting node config controller"
	I0819 13:26:16.229137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:26:16.327921       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:26:16.328027       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:26:16.329524       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a] <==
	W0819 13:26:07.370083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 13:26:07.370112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:26:07.370211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:26:07.370257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 13:26:07.370383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:26:07.370555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.223983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:26:08.224116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.226560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:26:08.226631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.418599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 13:26:08.418697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.451501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:26:08.451553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.457262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:26:08.457411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.614175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 13:26:08.614287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.907452       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 13:26:08.907547       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 13:26:12.063149       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:34:14 no-preload-441627 kubelet[3365]: E0819 13:34:14.049477    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:34:20 no-preload-441627 kubelet[3365]: E0819 13:34:20.221079    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074460220568490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:20 no-preload-441627 kubelet[3365]: E0819 13:34:20.221133    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074460220568490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:27 no-preload-441627 kubelet[3365]: E0819 13:34:27.048448    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:34:30 no-preload-441627 kubelet[3365]: E0819 13:34:30.224421    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074470223805574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:30 no-preload-441627 kubelet[3365]: E0819 13:34:30.224469    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074470223805574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:40 no-preload-441627 kubelet[3365]: E0819 13:34:40.227301    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074480226556135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:40 no-preload-441627 kubelet[3365]: E0819 13:34:40.227324    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074480226556135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:42 no-preload-441627 kubelet[3365]: E0819 13:34:42.048492    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:34:50 no-preload-441627 kubelet[3365]: E0819 13:34:50.228763    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074490228503090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:50 no-preload-441627 kubelet[3365]: E0819 13:34:50.228806    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074490228503090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:34:53 no-preload-441627 kubelet[3365]: E0819 13:34:53.048423    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:35:00 no-preload-441627 kubelet[3365]: E0819 13:35:00.230857    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074500230280809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:35:00 no-preload-441627 kubelet[3365]: E0819 13:35:00.230900    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074500230280809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:35:06 no-preload-441627 kubelet[3365]: E0819 13:35:06.051715    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]: E0819 13:35:10.069265    3365 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]: E0819 13:35:10.233518    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074510233163383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:35:10 no-preload-441627 kubelet[3365]: E0819 13:35:10.233549    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074510233163383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:35:20 no-preload-441627 kubelet[3365]: E0819 13:35:20.048617    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:35:20 no-preload-441627 kubelet[3365]: E0819 13:35:20.235657    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074520235076782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:35:20 no-preload-441627 kubelet[3365]: E0819 13:35:20.235697    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074520235076782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8] <==
	I0819 13:26:17.157205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:26:17.216589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:26:17.220587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:26:17.271477       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:26:17.271680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-441627_25b14b2b-ba62-4368-b0a1-03d58b36820f!
	I0819 13:26:17.287504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"93d6ee90-5ad2-49a8-92a3-1cdc99398092", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-441627_25b14b2b-ba62-4368-b0a1-03d58b36820f became leader
	I0819 13:26:17.372318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-441627_25b14b2b-ba62-4368-b0a1-03d58b36820f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441627 -n no-preload-441627
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-441627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rzp96
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-441627 describe pod metrics-server-6867b74b74-rzp96
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-441627 describe pod metrics-server-6867b74b74-rzp96: exit status 1 (63.23535ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rzp96" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-441627 describe pod metrics-server-6867b74b74-rzp96: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:29:02.189257 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:29:07.092338 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:29:14.943380 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:29:27.644380 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:29:40.837619 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:29:58.879325 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 30 more times)
E0819 13:30:39.335569 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 10 more times)
E0819 13:30:50.708655 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 12 more times)
E0819 13:31:03.900774 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 4 more times)
E0819 13:31:08.804520 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 12 more times)
E0819 13:31:21.944909 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 4 more times)
E0819 13:31:26.667950 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 35 more times)
E0819 13:32:02.401623 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 2 more times)
E0819 13:32:05.265371 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 25 more times)
E0819 13:32:31.869848 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 11 more times)
E0819 13:32:44.026152 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
    (previous warning repeated 7 more times)
E0819 13:32:51.877872 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
[warning above repeated 69 more times while 192.168.39.238:8443 refused connections]
E0819 13:34:02.189408 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
[warning above repeated 25 more times while 192.168.39.238:8443 refused connections]
E0819 13:34:27.644259 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
[warning above repeated 12 more times while 192.168.39.238:8443 refused connections]
E0819 13:34:40.837445 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
[warning above repeated 17 more times while 192.168.39.238:8443 refused connections]
E0819 13:34:58.879316 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
[warning above repeated 39 more times while 192.168.39.238:8443 refused connections]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:35:39.336656 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
	[last warning repeated 28 more times]
E0819 13:36:08.804662 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
	[last warning repeated 17 more times]
E0819 13:36:26.668629 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
	[last warning repeated 57 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (243.348073ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-620467" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
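The warning loop and the "context deadline exceeded" failure above reflect the usual pattern of listing pods by label selector on a fixed interval until a timeout expires. The following is a minimal client-go sketch of that shape, purely illustrative and not the actual minikube test helper; the function name, the 3-second interval, and the kubeconfig path are assumptions, while the namespace, selector, and 9m0s timeout are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls the apiserver until a pod matching the selector
// is Running, or the timeout expires (mirroring the 9m0s wait in the log).
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Transient errors such as "connection refused" while the
				// apiserver is down are logged and retried, not returned.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumed kubeconfig location; the real harness builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsByLabel(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("failed waiting for dashboard pod:", err)
	}
}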
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (233.913528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
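The status probes above shell out to the minikube binary with a Go template (--format={{.Host}} or --format={{.APIServer}}) to extract one field, and they treat exit status 2 as informational rather than fatal ("may be ok"). A small sketch of that pattern using os/exec, with the binary path, flags, and profile name copied from the log but otherwise not the harness's real code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-620467", "-n", "old-k8s-version-620467")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// minikube status exits non-zero (e.g. 2) when a component is stopped;
		// the harness notes "status error: exit status 2 (may be ok)" and continues.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube status:", err)
		return
	}
	fmt.Println("host state:", state)
}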
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-620467 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-620467 logs -n 25: (1.579793237s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
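The preload.go lines above show the cache check that lets this start skip the tarball download: the preloaded-images tarball is already on disk, so only its existence is verified. A minimal, illustrative Go sketch of that check-then-skip pattern (cachedPreload and the shortened path are assumptions for the example, not minikube's actual code):

package main

import (
	"fmt"
	"os"
)

// cachedPreload reports whether a preloaded image tarball already exists
// locally and is non-empty, so the download step can be skipped.
func cachedPreload(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Size() > 0
}

func main() {
	// Hypothetical cache location; the real path is derived from MINIKUBE_HOME.
	tarball := "/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
	if cachedPreload(tarball) {
		fmt.Println("found local preload, skipping download")
		return
	}
	fmt.Println("preload missing, would download it here")
}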
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
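The repeated "Error dialing TCP ... no route to host" lines come from process 2151869 polling the guest's SSH port until the no-preload VM becomes reachable (it never does here, which is why provisioning later fails). A minimal sketch of such a dial-and-retry loop, assuming an illustrative waitForSSH helper rather than libmachine's actual implementation:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr until a TCP connection succeeds or the deadline
// passes, mirroring the dial-and-retry pattern visible in the log.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s failed: %v; retrying\n", addr, err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("192.168.50.161:22", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}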
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
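The "will retry after ...: waiting for machine to come up" lines above are the kvm2 driver polling libvirt's DHCP leases with a growing, jittered delay until the restarted domain reports an IP. A minimal sketch of that backoff loop under stated assumptions (lookupIP is a stand-in for the lease query; the real logic lives in retry.go and the driver):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for querying the libvirt DHCP leases; here it simply
// fails a few times before "finding" an address (illustrative only).
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.72.69", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, roughly like the waits in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("%v: will retry after %v\n", err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}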
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
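"Using SSH client type: external" above means the driver shells out to the system ssh binary with the listed options and runs "exit 0" as a liveness probe. A minimal sketch of composing that command with os/exec, reusing the arguments shown in the log (a hedged illustration, not the libmachine code path):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Options taken from the WaitForSSH log line above.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa",
		"-p", "22",
		"docker@192.168.72.69",
		"exit 0", // the same liveness probe the log runs
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}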
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
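
	The pod_ready.go waits above walk the system-critical pods and skip each one because the node itself is not yet Ready. A rough client-go sketch of the underlying check, listing kube-system pods by selector and testing their Ready condition, is shown below; the kubeconfig path in main is an assumption, and the sketch does not reproduce minikube's node-status short-circuit.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podsReady reports whether every kube-system pod matching selector has the
	// PodReady condition set to True.
	func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Assumed kubeconfig path; point this at the profile's kubeconfig when trying it.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for _, sel := range []string{"k8s-app=kube-dns", "k8s-app=kube-proxy", "component=etcd"} {
			ok, err := podsReady(context.Background(), cs, sel)
			fmt.Println(sel, ok, err)
		}
	}
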
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
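
	The addon step above scp's each manifest into /etc/kubernetes/addons and applies it with the node-local kubectl under /var/lib/minikube/binaries. The sketch below replays that apply from the host over plain ssh; the key path, IP and kubectl invocation are copied from the log, while the ssh options are assumptions rather than minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa"
		// Same command the log shows minikube running on the node.
		remote := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.31.0/kubectl apply " +
			"-f /etc/kubernetes/addons/metrics-apiservice.yaml " +
			"-f /etc/kubernetes/addons/metrics-server-deployment.yaml " +
			"-f /etc/kubernetes/addons/metrics-server-rbac.yaml " +
			"-f /etc/kubernetes/addons/metrics-server-service.yaml"
		cmd := exec.Command("ssh",
			"-i", key,
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"docker@192.168.72.69", remote)
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out), err)
	}
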
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
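Editor's note: the `date +%s.%N` round-trip above is how the guest's clock is compared against the host before provisioning continues; the run succeeds because the 84.998285ms delta is under the tolerance. A minimal sketch of that comparison, assuming an illustrative one-second tolerance and helper names (not minikube's actual fix.go code):

// guestclock.go - sketch of the guest-clock delta check seen in the log above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1724073615.677156491")
// into a time.Time; it assumes the nanosecond field has its full 9 digits.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724073615.677156491")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = time.Second // illustrative tolerance, not minikube's value
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}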
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
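Editor's note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup), loads br_netfilter when the sysctl is absent, enables IP forwarding, restarts crio, and then waits for the socket and crictl version. A minimal sketch of just the config-rewrite step, with regexes mirroring the sed expressions; the helper name and sample input are illustrative:

// crioconf.go - sketch of the pause_image / cgroup_manager rewrite done above.
package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf applies the same two substitutions the sed commands perform.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
}

In the real flow the rewritten file only takes effect after the `sudo systemctl restart crio` shown above.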
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
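Editor's note: the "couldn't find preloaded image ... assuming images are not preloaded" decision above comes from inspecting `sudo crictl images --output json` for the kube-apiserver tag of the target version; only when it is missing does minikube copy over and extract the ~473 MB preload tarball as shown. A rough sketch of that check, with a hard-coded JSON sample standing in for the real crictl output:

// preloadcheck.go - sketch of the preloaded-image probe behind crio.go:510 above.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasPreloadedImage reports whether the runtime already has the wanted tag.
func hasPreloadedImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasPreloadedImage(raw, "registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(ok, err) // false <nil> -> fall back to copying and extracting the preload tarball
}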
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
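Editor's note: the embed-certs-823470 lines above show the usual libmachine wait loop after a domain is started: the DHCP lease is polled with growing, jittered delays ("will retry after ...") until an IP appears. A minimal sketch of such a loop; the backoff constants, timeout, and the fake lookup are illustrative rather than the exact retry.go behaviour:

// retry.go - sketch of the "waiting for machine to come up" retry loop above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it returns an address or the timeout expires.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter the delay so parallel waiters don't hammer libvirt in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.10", nil // illustrative address, not from the log
	}, 30*time.Second)
	fmt.Println(ip, err)
}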
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
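Editor's note: in parallel, the default-k8s-diff-port-038572 lines above show the post-start health checks: wait for the node's Ready condition, then wait for each system-critical pod and record how long it took as a duration metric. A minimal sketch of that per-pod wait; the stubbed readiness callback stands in for the real API-server query (the log uses client-go), and the pod names are copied from the log for illustration:

// podready.go - sketch of the pod_ready.go waits recorded above.
package main

import (
	"fmt"
	"time"
)

// waitForPodReady polls checkReady until it reports true or the timeout passes,
// returning how long the wait took.
func waitForPodReady(name string, checkReady func(string) bool, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		if checkReady(name) {
			return time.Since(start), nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return time.Since(start), fmt.Errorf("pod %q was not Ready within %v", name, timeout)
}

func main() {
	pods := []string{
		"coredns-6f6b679f8f-8tl4c",
		"etcd-default-k8s-diff-port-038572",
		"kube-apiserver-default-k8s-diff-port-038572",
	}
	for _, p := range pods {
		took, err := waitForPodReady(p, func(string) bool { return true }, 6*time.Minute)
		fmt.Printf("pod %q Ready after %v (err=%v)\n", p, took, err)
	}
}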
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
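Editor's note: the LoadCachedImages block above checks each required image against the runtime (`podman image inspect`), marks the missing ones as "needs transfer", removes stale tags with `crictl rmi`, and then tries to load them from the on-disk cache; here it ultimately fails because the cached etcd_3.4.13-0 file is absent, hence the warning. A simplified sketch of just the "needs transfer" decision, with hard-coded image lists for illustration:

// cacheimages.go - sketch of the needs-transfer decision visible in the log above.
package main

import "fmt"

// needsTransfer reports which required images are absent from the runtime.
func needsTransfer(required []string, present map[string]bool) []string {
	var missing []string
	for _, img := range required {
		if !present[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	}
	present := map[string]bool{"registry.k8s.io/pause:3.2": true}
	for _, img := range needsTransfer(required, present) {
		fmt.Printf("%q needs transfer: not present in container runtime\n", img)
	}
}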
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
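Editor's note: the 2123-byte kubeadm.yaml.new copied here is the rendering of the kubeadm config printed above, produced from the options logged at kubeadm.go:181. A minimal sketch of how such a config can be generated with text/template; only a fragment of the document is reproduced and the struct fields are illustrative, not minikube's real template:

// kubeadmtmpl.go - sketch of rendering a kubeadm config from the options above.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
`

type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	CRISocket         string
	NodeName          string
	KubernetesVersion string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.39.238",
		APIServerPort:     8443,
		CRISocket:         "/var/run/crio/crio.sock",
		NodeName:          "old-k8s-version-620467",
		KubernetesVersion: "v1.20.0",
	})
}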
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
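Editor's note: the /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal above) follow one pattern: drop any existing line for the name, then append a fresh "IP<TAB>name" mapping. The log does this with a grep/echo pipeline under sudo; a small sketch of the same idempotent upsert as a pure function:

// hostsentry.go - sketch of the /etc/hosts upsert performed twice in the log.
package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing mapping for name and appends a new one,
// mirroring `{ grep -v $'\t<name>$' /etc/hosts; echo "<ip>\t<name>"; } > /tmp/h.$$`.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.39.238", "control-plane.minikube.internal"))
}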
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
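Editor's note: the certificate handling above hashes each CA PEM with `openssl x509 -hash` and symlinks it as `<hash>.0` under /etc/ssl/certs so the system trust store resolves it, then uses `openssl x509 -checkend 86400` to confirm none of the cluster certs expire within a day. A minimal sketch of the hash-and-link step; the paths come from the log, the helper is illustrative, and actually running it needs root and openssl on PATH:

// certlink.go - sketch of the hash/symlink step used for the CA certs above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and points
// <certsDir>/<hash>.0 at the PEM file, replacing any stale link (like `ln -fs`).
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}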
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
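The cleanup above boils down to a simple rule: each candidate /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing, so the kubeadm phases that follow can regenerate it. A minimal Go sketch of that loop, assuming a hypothetical runRemote helper in place of minikube's SSH runner (this is not the real kubeadm.go code):

package main

import (
	"fmt"
	"os/exec"
)

// runRemote is a placeholder: in minikube the command runs on the guest over SSH.
func runRemote(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file) is missing,
		// which is exactly the "may not be in ... - will remove" case in the log.
		if err := runRemote(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			runRemote(fmt.Sprintf("sudo rm -f %s", f))
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}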
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
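The restart path re-runs the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) and then waits for the kube-apiserver process by repeating the pgrep probe roughly every half second, as the recurring Run lines below show. A minimal Go sketch of that wait loop, assuming a hypothetical runRemote helper in place of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runRemote is a placeholder: in minikube the command is executed on the guest over SSH.
func runRemote(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a matching kube-apiserver process exists.
		if err := runRemote(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}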
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
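The configureAuth step above copies the host CA and client certs and generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost, and minikube. A minimal, self-contained Go sketch of producing a certificate with those SANs; it self-signs for brevity, whereas the real step signs with the existing minikube CA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a fresh key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-823470"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same kinds of SANs the log records for the server cert.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.50")},
		DNSNames:    []string{"embed-certs-823470", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}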
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
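The clock check above reads the guest clock with date +%s.%N, compares it against the host clock, and accepts the machine when the delta stays within a tolerance (64.68ms here). A minimal Go sketch of that comparison; the tolerance value and helper names are chosen for illustration only, not taken from fix.go:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	return math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest, err := parseGuestClock("1724073634.971972974")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1724073634, 907292264) // hypothetical host reading
	fmt.Println("delta ok:", withinTolerance(guest, host, time.Second))
}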
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
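The CRI-O preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf in place so the runtime uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, then restarts crio. A minimal Go sketch of the same substitution applied to an in-memory config string; minikube runs the equivalent sed commands over SSH:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Example starting config (hypothetical contents).
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Match whole lines, as the sed expressions in the log do.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}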
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
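Note: the runner above is loading pre-cached image tarballs into the node's CRI-O image store. It stats each archive under /var/lib/minikube/images so already-transferred copies are skipped ("copy: skipping ... (exists)"), then runs `sudo podman load -i` on each one in turn. A rough standalone sketch of those two steps (plain os/exec, not minikube's SSH runner; the archive path is just one taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical local archive path; the log uses /var/lib/minikube/images/<name> on the node.
	archive := "/var/lib/minikube/images/coredns_v1.11.1"

	// Step 1: skip the transfer if the archive already exists
	// (mirrors the `stat -c "%s %y"` / "copy: skipping" lines above).
	if _, err := os.Stat(archive); err != nil {
		fmt.Println("archive missing, would copy it here:", err)
		return
	}

	// Step 2: load the archive into the container store
	// (mirrors `sudo podman load -i <archive>`).
	out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
	if err != nil {
		fmt.Printf("podman load failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("loaded %s:\n%s", archive, out)
}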
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
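The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a wait loop: the runner polls roughly every 500ms for a kube-apiserver process whose full command line matches the pattern. A minimal sketch of an equivalent loop, assuming local execution rather than the harness's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServer polls pgrep until a matching kube-apiserver process
// appears or the timeout expires.
func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest process, -f match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return strings.TrimSpace(string(out)), nil // PID found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver PID:", pid)
}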
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
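The multi-document kubeadm config shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. As an aside, one quick way to sanity-check such a file is to decode each YAML document and print its kind; a small sketch using gopkg.in/yaml.v3 (an assumed parser choice, not anything the test itself uses):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log; point this at a local copy of the rendered config.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			fmt.Println("parse error:", err)
			return
		}
		// Every kubeadm document carries apiVersion and kind at the top level.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}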
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
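The `openssl x509 ... -checkend 86400` runs above confirm that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The same check expressed with Go's crypto/x509, as an illustrative standalone snippet (the cert path is one of those checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent of `openssl x509 -checkend 86400`: flag the cert if it
	// expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}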
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
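For a restart, the runner re-runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config rather than doing a full `kubeadm init`. A hedged sketch of that sequence as a plain loop over the same commands (binary and config paths are taken from the log; this is not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	bin := "/var/lib/minikube/binaries/v1.31.0/kubeadm" // version-pinned binary, as in the log
	for _, phase := range phases {
		args := append([]string{bin}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Each phase needs root, mirroring the log's `sudo env PATH=... kubeadm init phase ...`.
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}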
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
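The 403 and 500 responses above are the expected progression while the restarted apiserver finishes its post-start hooks; the runner simply re-queries /healthz until it returns 200. A minimal poller along the same lines (the address is the one from the log, and TLS verification is skipped only because this standalone probe talks to the self-signed endpoint by IP):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The endpoint serves the cluster's self-signed cert; skip verification
		// for this illustrative probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.161:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}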
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
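Once the apiserver answers, the harness waits for each system-critical pod to reach the Ready condition; the `pod_ready.go:98` lines show it skipping pods whose node itself is not yet Ready. A condensed version of that Ready check using client-go (the kubeconfig path and pod name here are illustrative, not taken from the harness):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has the Ready condition set to True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "kube-proxy-v76kp")
	fmt.Println(ready, err)
}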
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
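	The cycle above (probe each control-plane component with crictl, then collect kubelet, dmesg, CRI-O and container-status output) repeats every few seconds. Every crictl probe returns no container IDs and "describe nodes" fails because nothing answers on localhost:8443, which is consistent with the kube-apiserver never having started on this node. A minimal sketch of re-running the same probes by hand, assuming SSH access to the node; <profile> is a placeholder for the minikube profile under test, not a name taken from this log:

	  minikube ssh -p <profile>                                   # open a shell on the node (placeholder profile name)
	  sudo crictl ps -a --quiet --name=kube-apiserver             # empty output here means no apiserver container, running or exited
	  sudo journalctl -u kubelet -n 400 | grep -i apiserver       # kubelet's view of why the static pod is not coming up
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig               # same "connection refused" while nothing listens on 8443

	These mirror the commands logged above, so an empty crictl listing plus the same connection-refused error would confirm the control plane is simply not running rather than merely unreachable.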
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
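	The pod_ready lines with other PIDs (2153523, 2151869, 2152365) come from other test profiles running in parallel in the same log stream; each is polling its own metrics-server replica (-8dwpf, -hd777, -hmghm) for the Ready condition while the profile above keeps probing for an apiserver. A quick way to view the same status by hand, assuming the addon's usual k8s-app=metrics-server label and using <context> as a placeholder for the profile's kubeconfig context:

	  kubectl --context <context> -n kube-system get pods -l k8s-app=metrics-server -o wide   # Ready column the tests are polling
	  kubectl --context <context> -n kube-system wait pod -l k8s-app=metrics-server \
	      --for=condition=Ready --timeout=4m                                                  # mirrors the 4m wait that times out below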
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
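	After exactly 4m the wait on metrics-server-6867b74b74-hmghm is abandoned (context deadline exceeded) and that profile moves on to locating the apiserver process; the crictl listings that follow do find apiserver, etcd, coredns, scheduler, proxy, controller-manager and storage-provisioner containers, so that control plane is up and only metrics-server is unready. A hedged sketch of the natural next step, again with <context> as a placeholder context name and assuming the k8s-app=metrics-server label and a deployment named metrics-server:

	  kubectl --context <context> -n kube-system describe pod -l k8s-app=metrics-server   # events usually show the failing readiness probe
	  kubectl --context <context> -n kube-system logs deploy/metrics-server --tail=100    # recent container output, if the pod has started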
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
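The empty crictl listings and the refused connection to localhost:8443 above are consistent with a control plane that is not running at this point: no kube-apiserver, etcd, scheduler or controller-manager container exists, so the "describe nodes" call cannot reach an apiserver, and a few seconds later (13:24:27) minikube gives up restarting the primary control plane and falls back to kubeadm reset. As a rough illustration only (not minikube's actual code), a Go sketch of the same crictl query pattern, assuming crictl is installed and runnable via sudo:

    // Illustrative only: list all container IDs crictl knows for one
    // Kubernetes component, mirroring the logged
    // "sudo crictl ps -a --quiet --name=kube-apiserver" calls.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one container ID per line; no output means no container.
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println(c, "lookup failed:", err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }

An empty result for every component, as in the log above, is what prompts the "No container was found matching ..." warnings.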
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
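The healthz probe above returning 200/"ok" is what lets this cluster (default-k8s-diff-port-038572) proceed to waiting for kube-system pods. A minimal, hedged sketch of such a probe, using the address and port from the log; certificate verification is skipped only because this sketch does not load the test cluster's CA, which is an assumption rather than minikube's actual behavior:

    // Illustrative apiserver healthz probe against the endpoint logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.72.69:8444/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers HTTP 200 with the body "ok".
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
    }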
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
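The pod_ready lines repeated throughout this log poll the metrics-server pod's Ready condition until the 4m0s deadline expires; once it does, this cluster (embed-certs-823470) also falls back to kubeadm reset followed by a fresh kubeadm init. A hedged sketch of an equivalent wait loop, shelling out to kubectl (assumed on PATH and pointed at the right context) rather than using minikube's real client-go implementation; the pod name, namespace, and timeout are taken from the log:

    // Illustrative Ready-condition wait, not minikube's pod_ready.go code.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(namespace, pod string) bool {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s timeout in the log
        for time.Now().Before(deadline) {
            if podReady("kube-system", "metrics-server-6867b74b74-8dwpf") {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for the pod to become Ready")
    }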
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
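The kubelet-check messages above come from kubeadm's wait for http://localhost:10248/healthz on the v1.20.0 node; the connection keeps being refused because the kubelet never starts listening. A minimal sketch of the same probe with retries, where the endpoint comes from the log and the retry interval and attempt count are illustrative assumptions:

    // Illustrative kubelet healthz retry loop; not kubeadm's implementation.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        for attempt := 1; attempt <= 10; attempt++ {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kubelet is healthy")
                return
            }
            if err == nil {
                resp.Body.Close()
            }
            // "connection refused" here means the kubelet is not (yet) listening.
            fmt.Printf("attempt %d: kubelet not healthy yet\n", attempt)
            time.Sleep(5 * time.Second)
        }
        fmt.Println("kubelet never became healthy")
    }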
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.903306471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074646903259858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=173f262e-95cb-4fe9-9d0d-c29fa9406da1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.904226071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c9d6fd3-f7b2-491f-82c0-6b9d89717119 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.904318813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c9d6fd3-f7b2-491f-82c0-6b9d89717119 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.904382727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5c9d6fd3-f7b2-491f-82c0-6b9d89717119 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.936886373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39e768fd-e234-4af4-9bd7-63bdbbb61ab9 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.936981122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39e768fd-e234-4af4-9bd7-63bdbbb61ab9 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.938307123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=552ed27e-f66e-4964-aa55-e37dafda1be2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.938703199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074646938670546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=552ed27e-f66e-4964-aa55-e37dafda1be2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.939655593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bca53ef0-b10f-4d7c-93bf-c1d74a0bec33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.939728979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bca53ef0-b10f-4d7c-93bf-c1d74a0bec33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.939768860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bca53ef0-b10f-4d7c-93bf-c1d74a0bec33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.973901742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d14b138-2915-46ce-9ddc-4751e3566a24 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.973990713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d14b138-2915-46ce-9ddc-4751e3566a24 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.974978996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36034d12-a2ea-4508-ac43-cd8143823aa3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.975467174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074646975445197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36034d12-a2ea-4508-ac43-cd8143823aa3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.975922505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38487c3b-093e-4b29-8faa-a2d53c94ae88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.975988197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38487c3b-093e-4b29-8faa-a2d53c94ae88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:26 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:26.976028417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=38487c3b-093e-4b29-8faa-a2d53c94ae88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.015545347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ae890c2-3627-4ef3-b3a8-fcd79e16b060 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.015649628Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ae890c2-3627-4ef3-b3a8-fcd79e16b060 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.017077961Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ae62792-532a-42a8-94c9-3335cc742e1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.017550480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074647017529402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ae62792-532a-42a8-94c9-3335cc742e1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.018258077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=347d11cf-2a8a-47d6-86b9-815004900875 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.018310752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=347d11cf-2a8a-47d6-86b9-815004900875 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:37:27 old-k8s-version-620467 crio[650]: time="2024-08-19 13:37:27.018340946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=347d11cf-2a8a-47d6-86b9-815004900875 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052662] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041462] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.891894] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.505316] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613375] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.587338] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.059255] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064534] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.190720] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.158664] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.305924] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.524185] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.059446] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.840054] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	[ +13.606049] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 13:24] systemd-fstab-generator[5082]: Ignoring "noauto" option for root device
	[Aug19 13:26] systemd-fstab-generator[5362]: Ignoring "noauto" option for root device
	[  +0.067230] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:37:27 up 17 min,  0 users,  load average: 0.18, 0.06, 0.03
	Linux old-k8s-version-620467 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: goroutine 143 [runnable]:
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0001a1340)
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: goroutine 144 [select]:
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0008cd220, 0xc000201b01, 0xc000c4b580, 0xc000411090, 0xc000cd2e00, 0xc000cd2dc0)
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000201b60, 0x0, 0x0)
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001a1340)
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 19 13:37:21 old-k8s-version-620467 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 19 13:37:21 old-k8s-version-620467 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 13:37:21 old-k8s-version-620467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 13:37:22 old-k8s-version-620467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 19 13:37:22 old-k8s-version-620467 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 13:37:22 old-k8s-version-620467 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 13:37:22 old-k8s-version-620467 kubelet[6547]: I0819 13:37:22.630484    6547 server.go:416] Version: v1.20.0
	Aug 19 13:37:22 old-k8s-version-620467 kubelet[6547]: I0819 13:37:22.630823    6547 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 13:37:22 old-k8s-version-620467 kubelet[6547]: I0819 13:37:22.632840    6547 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 13:37:22 old-k8s-version-620467 kubelet[6547]: W0819 13:37:22.634031    6547 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 19 13:37:22 old-k8s-version-620467 kubelet[6547]: I0819 13:37:22.634332    6547 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (247.250509ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-620467" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)
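
The repeated kubelet-check refusals and the kubelet.service restart loop above (restart counter at 114, "Cannot detect current cgroup on cgroup v2") all point at the kubelet on this v1.20.0 node never becoming healthy. A minimal sketch of the follow-up steps the log itself suggests, assuming shell access to the old-k8s-version-620467 VM via minikube ssh; the retry at the end only shows the core driver/runtime/version flags this profile was started with, plus the cgroup-driver hint from the minikube suggestion:

    # Inside the VM (out/minikube-linux-amd64 ssh -p old-k8s-version-620467),
    # follow the troubleshooting steps printed by kubeadm above:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # From the host, retry the start with the suggested cgroup driver override:
    out/minikube-linux-amd64 start -p old-k8s-version-620467 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd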

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (460.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 13:41:15.43039873 +0000 UTC m=+6276.065189005
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-038572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.721µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-038572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
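
The check that fails here waits up to 9m0s for pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then inspects the dashboard-metrics-scraper deployment, expecting it to carry the registry.k8s.io/echoserver:1.4 image the addon was enabled with (see the "addons enable dashboard" entries in the Audit table below). One way to run the same checks by hand, assuming the default-k8s-diff-port-038572 context is still reachable:

    # List the pods the test waits on:
    kubectl --context default-k8s-diff-port-038572 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard
    # Same describe the test attempts (it hit the context deadline before this could run):
    kubectl --context default-k8s-diff-port-038572 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper
    # Print just the container image the test expects to contain registry.k8s.io/echoserver:1.4:
    kubectl --context default-k8s-diff-port-038572 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'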
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-038572 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-038572 logs -n 25: (2.036948229s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:40 UTC | 19 Aug 24 13:40 UTC |
	| delete  | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:40 UTC | 19 Aug 24 13:40 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
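The "will retry after ...: waiting for machine to come up" lines above record a polling loop that re-queries the VM's address with growing, jittered delays. A minimal Go sketch of that pattern, assuming a hypothetical lookupIP helper in place of the real libvirt DHCP-lease query (this is an illustration, not the minikube implementation):

    // retrysketch.go - sketch of a jittered, growing-backoff wait loop.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for inspecting the domain's DHCP lease.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Jittered backoff: delay plus a random fraction of it, as the varying
            // intervals in the log suggest.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the base delay between attempts
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }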
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
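The fix.go lines above compare the guest's clock (read over SSH with "date +%s.%N") against the host's and only act when the delta exceeds a tolerance. A minimal Go sketch of that check; the one-second tolerance below is an assumption for illustration, not a value taken from minikube:

    // clocksketch.go - sketch of a guest/host clock-delta tolerance check.
    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta returns the absolute difference between two timestamps.
    func clockDelta(guest, host time.Time) time.Duration {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        host := time.Now()
        guest := host.Add(74 * time.Millisecond) // delta similar to the 74.02675ms in the log
        const tolerance = time.Second            // assumed tolerance for this sketch

        if d := clockDelta(guest, host); d <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance %v\n", d, tolerance)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance %v, would resync\n", d, tolerance)
        }
    }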
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
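The sequence just above shows a fallback: verifying net.bridge.bridge-nf-call-iptables with sysctl fails (status 255, the module is not loaded), so br_netfilter is loaded with modprobe and IPv4 forwarding is enabled. A minimal Go sketch of that fallback, assuming root privileges on a real host; the commands mirror the log but this is illustrative, not minikube's code:

    // netfiltersketch.go - sketch of the sysctl-check / modprobe fallback above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run executes a command, forwarding its output to the console.
    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // First try to read the bridge netfilter sysctl; failure usually means
        // the br_netfilter module is not loaded yet.
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl check failed, loading br_netfilter:", err)
            if err := run("modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe br_netfilter failed:", err)
            }
        }
        // Then make sure IPv4 forwarding is on, as the log's "echo 1" command does.
        if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }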
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
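	The libmachine/kvm2 lines above poll for the domain's DHCP lease with a growing, randomized delay between attempts ("will retry after 217ms ... 270ms ... 389ms ..."). A minimal Go sketch of that retry shape, assuming a hypothetical lookupIP helper in place of the real libvirt lease query:

// retryip.go: sketch of the "will retry after <delay>" loop seen above. lookupIP is a
// stand-in for the libvirt DHCP-lease lookup and always fails here, so the loop simply
// runs until the deadline; the real code returns as soon as an address appears.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) { return "", errors.New("no lease yet") } // placeholder

func waitForIP(deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add some jitter and back off, roughly like the 217ms -> 270ms -> 389ms sequence.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	fmt.Println(waitForIP(3 * time.Second))
}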
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
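	The block above is the preload path: stat /preloaded.tar.lz4 on the guest, copy the cached tarball over when it is missing, extract it into /var with lz4 (keeping xattrs so file capabilities survive), then confirm with crictl that the runtime now has the images. A rough Go sketch of that check-copy-extract loop, with a local shell run() standing in for minikube's ssh_runner (the helper names are illustrative):

// preload.go: sketch of the "check, copy, extract, verify" preload flow seen above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command through a local shell; the real flow runs it over SSH on the guest.
func run(cmd string) error {
	return exec.Command("bash", "-c", cmd).Run()
}

func ensurePreload(localTarball string) error {
	// 1. Does the tarball already exist on the guest?
	if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		// 2. Missing: copy the cached tarball over (scp in the real flow).
		if err := run(fmt.Sprintf("cp %s /preloaded.tar.lz4", localTarball)); err != nil {
			return fmt.Errorf("copy preload tarball: %w", err)
		}
	}
	// 3. Extract into /var, preserving security.capability xattrs.
	if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("extract preload tarball: %w", err)
	}
	// 4. Clean up and confirm the runtime now sees the images.
	run("rm -f /preloaded.tar.lz4")
	return run("sudo crictl images --output json")
}

func main() {
	if err := ensurePreload("/path/to/preloaded-images.tar.lz4"); err != nil {
		fmt.Println("preload failed:", err)
	}
}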
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
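	The kubelet drop-in printed above uses the systemd idiom of an empty ExecStart= line to clear the packaged command before substituting minikube's own flags. A small text/template sketch that renders the same drop-in from the values shown in the log (the struct-free map here is illustrative, not minikube's types):

// kubelet_dropin.go: render the kubelet systemd drop-in shown in the log above.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropin))
	// Values taken from the log above; normally they come from the cluster config.
	t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.31.0",
		"NodeName": "default-k8s-diff-port-038572",
		"NodeIP":   "192.168.72.69",
	})
}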
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
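	The bash one-liner above pins control-plane.minikube.internal in /etc/hosts: it filters out any stale entry, appends the current IP, and swaps the file in via a temporary copy. A Go sketch of the same rewrite, using a scratch path so it is safe to run (the real flow copies over /etc/hosts with sudo):

// hosts_pin.go: sketch of the /etc/hosts rewrite performed by the one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like `grep -v $'\t<host>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // the real flow uses `sudo cp` to replace the file
}

func main() {
	fmt.Println(pinHost("/tmp/hosts-scratch", "192.168.72.69", "control-plane.minikube.internal"))
}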
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
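	The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL convention of naming trust-store links after the certificate's subject hash, which is exactly what `openssl x509 -hash -noout` prints. A sketch of that hash-and-symlink step, with illustrative paths:

// catrust.go: sketch of the "hash and symlink" CA install seen above. OpenSSL looks
// CAs up under /etc/ssl/certs by <subject-hash>.0, so each copied PEM gets a symlink
// named after its subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}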
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
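	Each `openssl x509 ... -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. The same check in pure Go, as a sketch with an illustrative path:

// certexpiry.go: report whether a PEM certificate expires within the given window,
// mirroring the `-checkend 86400` calls above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" is past NotAfter, i.e. the cert expires within the window.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}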
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
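	The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8444 is deleted so the following kubeadm phases can regenerate it (here they are all missing, so the removals are no-ops). A compact sketch of that cleanup loop; the endpoint and directory are taken from the log, the helper itself is illustrative:

// staleconf.go: drop any kubeconfig that does not point at the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cleanStaleConfigs(dir, endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it (mirrors `sudo rm -f`).
			os.Remove(path)
			fmt.Printf("removed stale %s\n", path)
		}
	}
}

func main() {
	cleanStaleConfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8444")
}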
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
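	Rather than running a full `kubeadm init`, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence, using the binary and config paths from the log:

// phases.go: replay the individual kubeadm init phases seen above, stopping on the first failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", args, err, out)
			return
		}
	}
}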
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
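	The healthz exchange above is the usual restart progression: 403 while anonymous requests are still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles and the system priority classes finish, then 200. A sketch of the polling loop; TLS verification is skipped here only to keep the example short, the real client trusts the cluster CA:

// healthz.go: poll the apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403 before RBAC bootstrap, 500 while post-start hooks are still running.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.69:8444/healthz", 4*time.Minute))
}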
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
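	The 1-k8s.conflist copied above configures the bridge CNI recommended for the kvm2 + crio combination. A representative (not verbatim) conflist written out by a small Go program; the pod subnet matches the 10.244.0.0/16 CIDR used throughout this run:

// cniconf.go: write a representative bridge CNI conflist of the kind placed in /etc/cni/net.d.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing to a scratch path; the real flow copies it to /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}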
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
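	The pod list and NodePressure figures above (8 kube-system pods, 2 CPUs, 17734596Ki ephemeral storage) come straight from the API server. A short client-go sketch that reads the same data; the kubeconfig path below is a placeholder:

// nodeinfo.go: list kube-system pods and print node capacity, as the waiter above does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name,
			"cpu capacity:", n.Status.Capacity.Cpu().String(),
			"ephemeral storage:", n.Status.Capacity.StorageEphemeral().String())
	}
}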
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
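	Each "waiting ... to be \"Ready\"" entry above checks the pod's PodReady condition and short-circuits while the hosting node is still NotReady, which is why every pod is skipped during this pass. A minimal sketch of the per-pod part of that check:

// podready.go: report whether a pod's PodReady condition is True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(pod)) // false: containers not ready yet
}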
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
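
The "waiting for machine to come up" lines above show retry.go pausing for a growing, slightly jittered interval between checks for the VM's IP address. The sketch below is a generic poll-with-backoff loop in the same spirit; the exact backoff policy is an assumption, not taken from minikube's retry package.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // Generic sketch: retry a check with a growing, jittered wait until it
    // succeeds or the deadline passes (mirrors the cadence in the log above).
    func waitFor(check func() error, timeout time.Duration) error {
        base := 200 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for attempt := 0; time.Now().Before(deadline); attempt++ {
            if err := check(); err == nil {
                return nil
            }
            // grow the wait each attempt and add jitter so waiters do not sync up
            wait := time.Duration(attempt+1)*base/2 + time.Duration(rand.Int63n(int64(base)))
            time.Sleep(wait)
        }
        return errors.New("timed out waiting for machine to come up")
    }

    func main() {
        // hypothetical check that never succeeds, just to exercise the loop
        fmt.Println(waitFor(func() error { return errors.New("no IP yet") }, 2*time.Second))
    }
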
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
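
The warning above comes from the image-cache fallback: with no usable preload, LoadCachedImages stats one cached tarball per required image under .minikube/cache/images and fails here because the etcd_3.4.13-0 file is absent. A small sketch that reports missing cache files for the same image list (the on-disk naming is inferred from the paths in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
        images := []string{
            "registry.k8s.io/etcd:3.4.13-0",
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/coredns:1.7.0",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        for _, img := range images {
            // on disk the tag separator ':' becomes '_', as in the log's paths
            path := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
            if _, err := os.Stat(path); err != nil {
                fmt.Println("missing cache file:", path)
            }
        }
    }
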
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
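
The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-name convention for /etc/ssl/certs: the link name is the certificate's subject hash, as printed by `openssl x509 -hash -noout`, plus a .0 suffix. A sketch of creating such a link by shelling out to openssl; writing under /etc/ssl/certs needs root, and the input path below is a placeholder.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // Sketch: compute the subject hash for a PEM and point /etc/ssl/certs/<hash>.0 at it.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // mirror `ln -fs`: drop any stale link first
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }
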
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
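
Each of the openssl calls above exits non-zero if the certificate would expire within 86400 seconds, i.e. the next 24 hours. An equivalent check in Go with crypto/x509; a sketch, not minikube's implementation.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Sketch of `openssl x509 -noout -in <cert> -checkend 86400`: report whether
    // the certificate will still be valid 24 hours from now.
    func validForADay(path string) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        fmt.Println(ok, err)
    }
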
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
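
Because existing configuration was found, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of running a full init. A compact sketch of that sequence; the kubeadm binary path and config location are copied from the log, and error handling is kept minimal.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
    }
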
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
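
The burst of pgrep lines above is the apiserver wait loop: roughly every 500ms minikube checks whether a kube-apiserver process for this profile has appeared yet. A sketch of the same poll (the pgrep arguments are taken from the log; the timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Sketch: poll every 500ms until the kube-apiserver process shows up or the
    // deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(30 * time.Second))
    }
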
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
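The line above is minikube regenerating the host-side server certificate with the listed SANs. A rough openssl equivalent of that step, purely as a hedged sketch (minikube does this in Go rather than by shelling out to openssl; file names, key size, and validity below are illustrative):

    # Create a key + CSR for the server cert, then sign it with the existing CA,
    # embedding the same SANs the log reports (IPs and DNS names).
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-823470"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.50,DNS:embed-certs-823470,DNS:localhost,DNS:minikube")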
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
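provisionDockerMachine has just written the CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube and restarted CRI-O (the SSH command a few lines above). A quick manual check that the runtime came back up, assuming shell access to the guest (the commands are illustrative; the paths are the ones from the log):

    cat /etc/sysconfig/crio.minikube           # should contain the --insecure-registry flag
    sudo systemctl status crio --no-pager      # service should be active after the restart
    sudo crictl info > /dev/null && echo "CRI-O socket is answering"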
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
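The stop/disable/mask sequence above (first cri-docker, then docker) is how minikube ensures CRI-O is the only container runtime answering on the node. Condensed into standalone commands against the same units the log touches (a sketch, not the exact order minikube uses):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service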
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
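The sysctl failure a few lines up is expected: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which is why the modprobe and the ip_forward write follow. Those only apply to the current boot; a persistent variant (illustrative, not something minikube itself writes) would be:

    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    sudo modprobe br_netfilter
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system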
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
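The version probes above can be reproduced by hand against the same socket; this is just the manual equivalent of what the start path logs (socket path taken from the log):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version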
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
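The one-liner above is minikube's idempotent /etc/hosts update: drop any existing line for the name, append a fresh entry, and copy the result back with sudo. The same idiom as a small reusable function (function and temp-file names are illustrative):

    update_hosts_entry() {
      # $1 = IP, $2 = hostname; rewrites /etc/hosts without duplicating the entry
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    update_hosts_entry 192.168.61.1 host.minikube.internal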
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
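The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A simple on-node sanity check, and the same comparison the restart path itself runs later, assuming SSH access to the guest:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new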
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
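Each -checkend run above asks a single question: does the certificate still have at least 86400 seconds (24 hours) of validity? openssl exits 0 if it does. Standalone, with one of the paths from the log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert is valid for at least another 24h"
    else
      echo "cert expires within 24h (or could not be read)"
    fi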
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
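The polling above follows a simple pattern: keep issuing GET requests to https://192.168.61.50:8443/healthz, treating 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (post-start hooks still failing) as not-yet-ready, until the endpoint answers 200. A rough standard-library sketch of such a loop, with TLS verification skipped for the probe; this is only an illustration of the pattern, not the api_server.go implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers 200
// or the deadline expires. TLS verification is skipped because the probe
// only cares about process health, not certificate identity.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Roughly the ~500ms retry cadence visible in the log.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.50:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}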
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
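Each pod_ready check above boils down to asking whether the pod's Ready condition is True. A client-go sketch of that single check; the kubeconfig path is an assumption for illustration, the pod name is taken from the log, and minikube's pod_ready.go layers retry and node-readiness handling on top of this:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches a pod and reports whether its Ready condition is True.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "kube-proxy-jcnwd")
	fmt.Println(ready, err)
}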
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
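The step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and then restarts CRI-O, so the variable is evidently meant to be sourced by the crio service. A minimal sketch of how a systemd drop-in could consume such a file; the drop-in path and the /usr/bin/crio ExecStart line are illustrative assumptions, not taken from this log:

    # hypothetical drop-in wiring /etc/sysconfig/crio.minikube into crio.service
    sudo mkdir -p /etc/systemd/system/crio.service.d
    sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf <<'EOF'
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio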
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
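Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl pinned. A quick way to check; the expected values are reconstructed from the commands shown, not read from the actual file:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected (reconstructed):
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]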
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
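The sysctl failure above is the expected state before br_netfilter is loaded; after the modprobe and the ip_forward write, the kernel prerequisites for pod networking can be checked directly with the same knobs the log touches:

    lsmod | grep br_netfilter                    # module loaded by the modprobe above
    sysctl net.bridge.bridge-nf-call-iptables    # key exists once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward            # should print 1 after the echo above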
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
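With /etc/crictl.yaml pointing at the CRI-O socket (written earlier in this run), crictl works against the runtime without extra flags. A small usage sketch; these are standard crictl subcommands, not commands taken from this log:

    sudo crictl info     # runtime status and config as CRI-O reports it
    sudo crictl images   # images currently in CRI-O's storage
    sudo crictl ps -a    # all containers, including exited ones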
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
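The LoadCachedImages phase above copies image tarballs from the host cache into /var/lib/minikube/images over SSH and loads each one into CRI-O's storage with podman. A manual equivalent for a single image, assuming the tarball already sits at the path the log shows:

    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
    sudo crictl images | grep kube-scheduler   # the runtime should now list the image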
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
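The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down; if the kubeadm build supports the config validate subcommand (recent releases do), it can be sanity-checked in place before kubeadm is invoked. A hedged sketch using the binary path and file path that appear in this log:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new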
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
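	The three ls/x509/ln sequences above follow OpenSSL's subject-hash lookup convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 is the hash-named link for minikubeCA.pem in this run). A minimal sketch of the same step for a single certificate:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    ls -l "/etc/ssl/certs/${hash}.0"   # should resolve back to minikubeCA.pem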
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
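	Each openssl call above passes -checkend 86400, which exits non-zero if the certificate would expire within the next 86400 seconds (24 hours); presumably this is how the existing control-plane certificates are judged still usable. A standalone example of the same check:

	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "apiserver.crt valid for at least another 24h"
	    else
	      echo "apiserver.crt expires within 24h - regenerate"
	    fi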
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
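	The four grep/rm pairs above apply one rule per kubeconfig under /etc/kubernetes: if the file does not reference https://control-plane.minikube.internal:8443 (here none of them exist yet), remove it so kubeadm can regenerate it in the next phase. Condensed into a loop purely for illustration:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done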
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
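	The healthz progression above (403 for system:anonymous, then 500 with a handful of failing poststarthooks, then 200) is what a freshly restarted apiserver typically reports: anonymous access to /healthz is forbidden until the RBAC bootstrap roles exist, and the endpoint keeps returning 500 until every poststarthook has finished. The same probe can be issued by hand; -k skips TLS verification and ?verbose asks for the per-check breakdown even on success:

	    curl -k "https://192.168.50.161:8443/healthz?verbose"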
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
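	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its exact contents are not shown in the log. For orientation only, a generic bridge + host-local conflist of the same shape looks like the following (every value here is an assumption, including reuse of the 10.244.0.0/16 pod CIDR from the kube-proxy config above):

	    cat <<'EOF'   # illustrative shape only, not the file minikube generated
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF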
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
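	pod_ready.go is polling the Ready condition of each system-critical pod and, before trusting it, checking that the hosting node itself is Ready, which is why the coredns/etcd/apiserver waits above are skipped while the node reports "Ready":"False". The same two checks can be reproduced with kubectl; context, node and pod names are taken from this run, and each command prints True or False:

	    kubectl --context no-preload-441627 get node no-preload-441627 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    kubectl --context no-preload-441627 -n kube-system get pod kube-proxy-v76kp \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'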
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
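	[editor's note] The 2152780 process (the control plane that never came back up) repeats the same diagnostic pass roughly every three seconds: probe for a kube-apiserver process, list CRI containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of one such pass, using only the commands that appear verbatim in the log above (container names, journal units and binary paths are taken from the log, not from minikube source):

	```bash
	#!/bin/bash
	# One diagnostic pass as run over SSH by minikube (commands copied from the log).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'             # is an apiserver process alive?

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # each query returns an empty list here, hence "No container was found matching ..."
	  sudo crictl ps -a --quiet --name="$name"
	done

	# log gathering for the failure report
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig              # fails: localhost:8443 refused
	sudo journalctl -u crio -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	```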
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
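	[editor's note] At 13:24:27 the failed restart is abandoned and `kubeadm reset` has already emptied /etc/kubernetes, so the config check finds none of the four kubeconfig files: each `grep` for the control-plane endpoint exits with status 2 ("No such file or directory") and minikube falls back to removing the (already absent) file before re-running `kubeadm init`. A sketch of that cleanup, using only the paths and endpoint shown in the log:

	```bash
	#!/bin/bash
	# Stale-kubeconfig cleanup as replayed from the log; paths and endpoint are verbatim.
	endpoint='https://control-plane.minikube.internal:8443'
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exits non-zero (status 2 when the file is missing), so the endpoint
	  # "may not be in" the file and it is removed before kubeadm init runs again
	  if ! sudo grep "$endpoint" "/etc/kubernetes/$conf"; then
	    sudo rm -f "/etc/kubernetes/$conf"
	  fi
	done
	```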
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
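
The addon lines above copy each manifest to /etc/kubernetes/addons/ on the node and then apply them all in a single kubectl invocation under an explicit KUBECONFIG. Below is a minimal Go sketch of that apply step only, with paths hard-coded from the log; the sudo wrapper and SSH transport are omitted, and this is an approximation, not minikube's actual addons code.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifests mirrors the single-invocation apply seen in the log:
    // kubectl is pointed at the on-node kubeconfig and given every addon
    // manifest as a separate -f flag.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := applyManifests(
    		"/var/lib/minikube/binaries/v1.31.0/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/metrics-apiservice.yaml",
    			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    			"/etc/kubernetes/addons/metrics-server-service.yaml",
    		},
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
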
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
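
The api_server lines above wait for the kube-apiserver process and then poll https://<node-ip>:8443/healthz until it answers 200 with body "ok". A minimal sketch of that readiness probe follows; it skips TLS verification purely to keep the example short (an assumption for illustration — the real check is configured with the cluster CA).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.61.50:8443/healthz", 2*time.Minute))
    }
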
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
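
The pod_ready lines in the run above repeatedly fetch each system pod and report has status "Ready":"False" until the pod's Ready condition flips to True. The client-go sketch below shows one way to express that loop; the kubeconfig path and pod name are copied from the log for illustration, and this is not the test harness's own helper.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, mirroring
    // the pod_ready loop in the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19479-2086935/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-95hnp", 6*time.Minute))
    }
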
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
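
The kubeadm.go:163 block above is a stale-config cleanup pass: for each kubeconfig-style file under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (a missing file, as here, is simply skipped). A local-filesystem sketch of that check follows; the real sequence runs grep and rm over SSH on the node, which is omitted here.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleConfig removes a kubeconfig-style file when it does not
    // reference the expected control-plane endpoint, mirroring the grep/rm
    // sequence in the log above. Error handling is simplified.
    func cleanStaleConfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		if os.IsNotExist(err) {
    			return nil // nothing to clean, same as the "No such file" case above
    		}
    		return err
    	}
    	if !strings.Contains(string(data), endpoint) {
    		return os.Remove(path)
    	}
    	return nil
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := cleanStaleConfig(f, "https://control-plane.minikube.internal:8443"); err != nil {
    			fmt.Println("cleanup failed:", err)
    		}
    	}
    }
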
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
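
The join commands printed above carry a --discovery-token-ca-cert-hash of the form sha256:<hex>. Per kubeadm's documented scheme, that value is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info; the sketch below computes it from a PEM-encoded ca.crt. This illustrates the format only and is not code from this test run.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // spkiHash returns the sha256:<hex> form used by --discovery-token-ca-cert-hash:
    // a SHA-256 digest of the CA certificate's Subject Public Key Info.
    func spkiHash(certPEM []byte) (string, error) {
    	block, _ := pem.Decode(certPEM)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // CA path used by kubeadm clusters
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	h, err := spkiHash(pemBytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println(h)
    }
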
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
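	For reference, the healthz probe performed above can be reproduced from a client machine once kubectl is configured for this cluster (an illustrative sketch only; it assumes a working kubeconfig for the same cluster):
	# Ask the API server for its health status; a healthy control plane answers "ok",
	# matching the 200 response logged above.
	kubectl get --raw /healthz
	# The reported control-plane version can be cross-checked the same way.
	kubectl version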
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
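	A quick post-start sanity check for a cluster brought up like the one above might look as follows (illustrative commands, not part of the test output; the context name is taken from the log line above):
	# Confirm the node registered and the core kube-system pods are running.
	kubectl --context no-preload-441627 get nodes -o wide
	kubectl --context no-preload-441627 -n kube-system get pods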
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
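	The troubleshooting steps suggested in the kubeadm output above can be run as one pass on the node (a sketch only; the socket path matches the cri-o endpoint referenced in the log, and CONTAINERID is a placeholder):
	# Check whether the kubelet service is up and inspect its recent journal entries.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers cri-o started, then pull logs for a failing one.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID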
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
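	The stale-config check-and-remove sequence logged above amounts to the following loop (a condensed sketch of what minikube is doing here, not a verbatim reproduction of its code):
	# For each kubeconfig kubeadm may have left behind, drop it unless it already
	# points at the expected control-plane endpoint.
	for f in admin kubelet controller-manager scheduler; do
		sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
			|| sudo rm -f /etc/kubernetes/$f.conf
	done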
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
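	Acting on the suggestion printed above would look roughly like the following (illustrative; the profile name is a placeholder for the cluster under test):
	# Retry the start with the kubelet cgroup driver pinned to systemd, as the output suggests.
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd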
	
	
	==> CRI-O <==
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.891736424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074876891704455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed4e2d9f-273d-4a10-8106-4e409ef9613b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.892617347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70394d48-a3f4-4660-8c3d-30c305503dc9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.892778104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70394d48-a3f4-4660-8c3d-30c305503dc9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.893388335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70394d48-a3f4-4660-8c3d-30c305503dc9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.936452387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16f6f995-ed4f-420e-848c-fcc53dfa98f0 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.936574337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16f6f995-ed4f-420e-848c-fcc53dfa98f0 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.938107694Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35bfa933-2ee4-4caf-aacf-55078aa1ad01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.938714778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074876938679825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35bfa933-2ee4-4caf-aacf-55078aa1ad01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.939583741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dce58b0a-0800-4486-964a-46f2394bcacd name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.939677276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dce58b0a-0800-4486-964a-46f2394bcacd name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.940024983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dce58b0a-0800-4486-964a-46f2394bcacd name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.979803290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=466c6b65-8317-463e-be7f-2ef9903b49f4 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.979881402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=466c6b65-8317-463e-be7f-2ef9903b49f4 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.981137296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f511f824-1d5d-45c8-88de-9529bcab8e3a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.981545810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074876981525234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f511f824-1d5d-45c8-88de-9529bcab8e3a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.982098158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9919b77-42da-4308-a54d-28f33651115c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.982168599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9919b77-42da-4308-a54d-28f33651115c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:16 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:16.982369902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9919b77-42da-4308-a54d-28f33651115c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.018378501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec011add-5e85-4f15-9bf6-85297904c978 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.018475010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec011add-5e85-4f15-9bf6-85297904c978 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.019720763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a28c27c-2035-439a-876b-27d84f031bb1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.020606738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074877020579757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a28c27c-2035-439a-876b-27d84f031bb1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.021136465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcf77a8b-d29c-4117-b467-49c1844569ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.021188912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcf77a8b-d29c-4117-b467-49c1844569ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:41:17 default-k8s-diff-port-038572 crio[732]: time="2024-08-19 13:41:17.021389215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724073639967559345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cd76f5a61723309dd1e5a4796fc847240477d1940a00cb2aadaf65bbe25388,PodSandboxId:d9fdfe858c14993e6fb854ed1050ce8716d1a3542d42fb30238014a43462492c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724073617986323952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 984019b3-bc63-4430-a519-0ed7bd8451e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6,PodSandboxId:0fc6d816561d4db4569603073f98db07f5074233c9b5ad140082e57c68096dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073616842776437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8tl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eec97b9-9cf7-48a1-9025-744336d756f8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226,PodSandboxId:f2df4fd240be105b7fd9a75ef003f89ecb16a1bd8b06c851a261b851fa94ee6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724073609105552862,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wnkk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c07972-b
838-4523-ac86-3cd5dd597c04,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778,PodSandboxId:2557921b1e5afe96cdb615a0af92f1b01556cefa9b80166e2d24f312924e1c59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724073609078325229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949a9312-bef2-4c56-af13-
a895bec28e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872,PodSandboxId:1399a9b9e890d587027cf0714a54dce09e3b3661355dededabbfe5422755ff95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073605414658556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19667de952ca5ecbb6133a
19ca1c20b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27,PodSandboxId:c6e0c0b82e6e9d0aec54b2b0db99ec976ccb33a2b9cb925d26370701300d81f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073605382876962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6ee53cb96a99c60287d81c0061088ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b,PodSandboxId:f4d07cd5c32173ce1b577dd221cdf94cbcbb6fa94d40d602ead1df9167a38efe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073605349751545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7cb8a1a886a4ea0c0ff3a670f86104e5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1,PodSandboxId:867295a3fc961bdc54e7bd34f50acea3b9ec75e86fb561e8033eb8cf8a04aaa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073605320239383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-038572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a681ecb5476da066a41bc2fbd21811
7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcf77a8b-d29c-4117-b467-49c1844569ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f547b541f49f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   2557921b1e5af       storage-provisioner
	f9cd76f5a6172       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   d9fdfe858c149       busybox
	0e12220c2cd13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   0fc6d816561d4       coredns-6f6b679f8f-8tl4c
	0eea9deb6fa46       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      21 minutes ago      Running             kube-proxy                1                   f2df4fd240be1       kube-proxy-wnkk2
	8613bb1986481       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   2557921b1e5af       storage-provisioner
	875409a409653       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      21 minutes ago      Running             kube-scheduler            1                   1399a9b9e890d       kube-scheduler-default-k8s-diff-port-038572
	90eea23cc577e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      21 minutes ago      Running             kube-controller-manager   1                   c6e0c0b82e6e9       kube-controller-manager-default-k8s-diff-port-038572
	c977986999bce       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      21 minutes ago      Running             kube-apiserver            1                   f4d07cd5c3217       kube-apiserver-default-k8s-diff-port-038572
	9e2caf34bd614       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   867295a3fc961       etcd-default-k8s-diff-port-038572
	
	
	==> coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55117 - 14713 "HINFO IN 3375249004560547600.4191906258613393435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014842328s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-038572
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-038572
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=default-k8s-diff-port-038572
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_12_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:12:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-038572
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:41:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:41:02 +0000   Mon, 19 Aug 2024 13:12:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:41:02 +0000   Mon, 19 Aug 2024 13:12:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:41:02 +0000   Mon, 19 Aug 2024 13:12:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:41:02 +0000   Mon, 19 Aug 2024 13:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.69
	  Hostname:    default-k8s-diff-port-038572
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3611796363d492a87c9864b44edb9db
	  System UUID:                e3611796-363d-492a-87c9-864b44edb9db
	  Boot ID:                    429e821f-4b53-4eb5-b3fa-7101007e33fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-8tl4c                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-038572                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-038572             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-038572    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wnkk2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-038572             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-hmghm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-038572 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-038572 event: Registered Node default-k8s-diff-port-038572 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-038572 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-038572 event: Registered Node default-k8s-diff-port-038572 in Controller
	
	
	==> dmesg <==
	[Aug19 13:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050317] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041187] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.770120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.548419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.928103] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.058915] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062557] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.181465] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.137365] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.291236] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[Aug19 13:20] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.060240] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.183138] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +4.585180] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.458945] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +3.303364] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.788745] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] <==
	{"level":"info","ts":"2024-08-19T13:20:07.155641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.69:2379"}
	{"level":"info","ts":"2024-08-19T13:20:22.617358Z","caller":"traceutil/trace.go:171","msg":"trace[1341966766] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"131.026461ms","start":"2024-08-19T13:20:22.486317Z","end":"2024-08-19T13:20:22.617343Z","steps":["trace[1341966766] 'process raft request'  (duration: 130.870893ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T13:20:22.795784Z","caller":"traceutil/trace.go:171","msg":"trace[1219515661] linearizableReadLoop","detail":"{readStateIndex:638; appliedIndex:637; }","duration":"137.469375ms","start":"2024-08-19T13:20:22.658295Z","end":"2024-08-19T13:20:22.795764Z","steps":["trace[1219515661] 'read index received'  (duration: 111.835711ms)","trace[1219515661] 'applied index is now lower than readState.Index'  (duration: 25.632967ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T13:20:22.796127Z","caller":"traceutil/trace.go:171","msg":"trace[1221977632] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"167.37966ms","start":"2024-08-19T13:20:22.628729Z","end":"2024-08-19T13:20:22.796109Z","steps":["trace[1221977632] 'process raft request'  (duration: 141.47724ms)","trace[1221977632] 'compare'  (duration: 25.35211ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T13:20:22.796244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.933036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-038572\" ","response":"range_response_count:1 size:5736"}
	{"level":"info","ts":"2024-08-19T13:20:22.796666Z","caller":"traceutil/trace.go:171","msg":"trace[1504298882] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-038572; range_end:; response_count:1; response_revision:601; }","duration":"138.386941ms","start":"2024-08-19T13:20:22.658265Z","end":"2024-08-19T13:20:22.796652Z","steps":["trace[1504298882] 'agreement among raft nodes before linearized reading'  (duration: 137.900507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T13:20:23.324619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.821503ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8983152294610211751 > lease_revoke:<id:7caa916ac43d1302>","response":"size:29"}
	{"level":"warn","ts":"2024-08-19T13:20:23.426883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.110268ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8983152294610211752 > lease_revoke:<id:7caa916ac43d12a1>","response":"size:29"}
	{"level":"info","ts":"2024-08-19T13:21:08.056348Z","caller":"traceutil/trace.go:171","msg":"trace[1502655456] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"324.655104ms","start":"2024-08-19T13:21:07.731668Z","end":"2024-08-19T13:21:08.056323Z","steps":["trace[1502655456] 'process raft request'  (duration: 285.934465ms)","trace[1502655456] 'compare'  (duration: 37.843883ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T13:21:08.057513Z","caller":"traceutil/trace.go:171","msg":"trace[1952833897] linearizableReadLoop","detail":"{readStateIndex:687; appliedIndex:686; }","duration":"253.384215ms","start":"2024-08-19T13:21:07.802324Z","end":"2024-08-19T13:21:08.055709Z","steps":["trace[1952833897] 'read index received'  (duration: 215.240219ms)","trace[1952833897] 'applied index is now lower than readState.Index'  (duration: 38.143327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T13:21:08.057859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.511047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-hmghm.17ed23d1ad906ee1\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-08-19T13:21:08.057932Z","caller":"traceutil/trace.go:171","msg":"trace[345354892] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-hmghm.17ed23d1ad906ee1; range_end:; response_count:1; response_revision:639; }","duration":"255.603254ms","start":"2024-08-19T13:21:07.802320Z","end":"2024-08-19T13:21:08.057923Z","steps":["trace[345354892] 'agreement among raft nodes before linearized reading'  (duration: 255.380523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T13:21:08.059113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T13:21:07.731655Z","time spent":"324.802273ms","remote":"127.0.0.1:49842","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4379,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" mod_revision:625 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" value_size:4313 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" > >"}
	{"level":"warn","ts":"2024-08-19T13:21:08.059299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.701469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hmghm\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2024-08-19T13:21:08.059666Z","caller":"traceutil/trace.go:171","msg":"trace[367566211] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-hmghm; range_end:; response_count:1; response_revision:639; }","duration":"231.064528ms","start":"2024-08-19T13:21:07.828589Z","end":"2024-08-19T13:21:08.059653Z","steps":["trace[367566211] 'agreement among raft nodes before linearized reading'  (duration: 230.674701ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T13:21:10.224536Z","caller":"traceutil/trace.go:171","msg":"trace[1098891225] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"123.493196ms","start":"2024-08-19T13:21:10.101025Z","end":"2024-08-19T13:21:10.224518Z","steps":["trace[1098891225] 'process raft request'  (duration: 123.390338ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T13:30:07.178163Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":845}
	{"level":"info","ts":"2024-08-19T13:30:07.191938Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":845,"took":"13.419339ms","hash":1896233454,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2854912,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-08-19T13:30:07.192134Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1896233454,"revision":845,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T13:35:07.185848Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1087}
	{"level":"info","ts":"2024-08-19T13:35:07.190241Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1087,"took":"4.068819ms","hash":135250772,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1691648,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-19T13:35:07.190301Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":135250772,"revision":1087,"compact-revision":845}
	{"level":"info","ts":"2024-08-19T13:40:07.197219Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1330}
	{"level":"info","ts":"2024-08-19T13:40:07.201218Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1330,"took":"3.69989ms","hash":2743522576,"current-db-size-bytes":2854912,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-19T13:40:07.201271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2743522576,"revision":1330,"compact-revision":1087}
	
	
	==> kernel <==
	 13:41:17 up 21 min,  0 users,  load average: 0.08, 0.14, 0.11
	Linux default-k8s-diff-port-038572 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] <==
	I0819 13:38:09.464041       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:38:09.464096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:40:08.465319       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:40:08.465447       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 13:40:09.467105       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:40:09.467244       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 13:40:09.467326       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:40:09.467339       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 13:40:09.468389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:40:09.468448       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:41:09.468776       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:41:09.468945       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:41:09.469116       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:41:09.469186       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:41:09.470186       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:41:09.470227       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] <==
	E0819 13:36:12.192766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:36:12.665282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:36:18.742610       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="228.858µs"
	E0819 13:36:42.200672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:36:42.673271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:37:12.207897       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:37:12.681837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:37:42.213638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:37:42.688894       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:38:12.220269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:38:12.696565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:38:42.226493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:38:42.703354       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:39:12.233262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:39:12.712875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:39:42.239677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:39:42.721419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:40:12.246512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:40:12.732804       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:40:42.253567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:40:42.740506       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:41:02.236919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-038572"
	E0819 13:41:12.259841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:41:12.748742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:41:14.746752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="339.331µs"
	
	
	==> kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:20:09.284929       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:20:09.293053       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.69"]
	E0819 13:20:09.293112       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:20:09.326662       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:20:09.326708       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:20:09.326738       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:20:09.329171       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:20:09.329458       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:20:09.329482       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:20:09.331163       1 config.go:197] "Starting service config controller"
	I0819 13:20:09.331202       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:20:09.331226       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:20:09.331229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:20:09.331930       1 config.go:326] "Starting node config controller"
	I0819 13:20:09.332021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:20:09.432189       1 shared_informer.go:320] Caches are synced for node config
	I0819 13:20:09.432242       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:20:09.432253       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] <==
	I0819 13:20:06.470700       1 serving.go:386] Generated self-signed cert in-memory
	W0819 13:20:08.442059       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:20:08.442157       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:20:08.442168       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:20:08.442175       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:20:08.478597       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:20:08.478646       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:20:08.483484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:20:08.483534       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:20:08.484222       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:20:08.484295       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:20:08.584156       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:40:24 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:24.726344     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:40:24 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:24.976374     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074824976032151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:24 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:24.977050     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074824976032151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:34 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:34.978532     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074834978082747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:34 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:34.978564     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074834978082747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:35 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:35.724671     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:40:44 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:44.980268     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074844979424813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:44 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:44.980310     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074844979424813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:48 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:48.727656     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:40:54 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:54.983042     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074854982123045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:54 default-k8s-diff-port-038572 kubelet[945]: E0819 13:40:54.983296     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074854982123045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:41:01 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:01.739876     945 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 19 13:41:01 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:01.740342     945 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 19 13:41:01 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:01.740577     945 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4q5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-hmghm_kube-system(73577eaa-4632-432b-8c5f-7421646873a3): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 19 13:41:01 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:01.742062     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:04.740319     945 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:04.986460     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074864985513032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:41:04 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:04.986489     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074864985513032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:41:14 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:14.730204     945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hmghm" podUID="73577eaa-4632-432b-8c5f-7421646873a3"
	Aug 19 13:41:14 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:14.989485     945 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074874989063984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:41:14 default-k8s-diff-port-038572 kubelet[945]: E0819 13:41:14.989514     945 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074874989063984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] <==
	I0819 13:20:09.188775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 13:20:39.196715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] <==
	I0819 13:20:40.076345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:20:40.094521       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:20:40.094709       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:20:57.499537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:20:57.500212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aba67e4-6226-4657-89c8-b0ed902c3845", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-038572_13afe944-8f32-45ac-845a-21f59001e97d became leader
	I0819 13:20:57.500840       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038572_13afe944-8f32-45ac-845a-21f59001e97d!
	I0819 13:20:57.603139       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038572_13afe944-8f32-45ac-845a-21f59001e97d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hmghm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 describe pod metrics-server-6867b74b74-hmghm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-038572 describe pod metrics-server-6867b74b74-hmghm: exit status 1 (67.837847ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hmghm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-038572 describe pod metrics-server-6867b74b74-hmghm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (460.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (530.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-823470 -n embed-certs-823470
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 13:43:38.731624976 +0000 UTC m=+6419.366415255
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-823470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-823470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.993µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-823470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-823470 logs -n 25
E0819 13:43:39.618039 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:43:40.094801 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-823470 logs -n 25: (2.076147184s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:40 UTC | 19 Aug 24 13:40 UTC |
	| delete  | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:40 UTC | 19 Aug 24 13:40 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:41 UTC | 19 Aug 24 13:41 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
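For reference, the server certificate generated here (signed by the local minikube CA, with the organization and SANs listed in the log line above) is equivalent in spirit to the openssl flow sketched below. minikube performs this step in Go rather than by shelling out, so the commands, file names, and validity period are illustrative assumptions, not what the test actually ran:

	# sketch only: an equivalent CA-signed server cert produced by hand with openssl
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem \
	  -subj "/O=jenkins.default-k8s-diff-port-038572" -out server.csr
	# sign with the existing CA and attach the same SANs; -days chosen arbitrarily for the sketch
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.69,DNS:default-k8s-diff-port-038572,DNS:localhost,DNS:minikube") \
	  -days 365 -out server.pem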
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
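Taken together, the sed edits above (13:19:57.727 through 13:19:57.810) rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. The affected keys end up looking roughly like the fragment below; the section headers and any other settings come from the stock file shipped in the minikube ISO, so treat this as an illustrative sketch rather than a verbatim copy of the file on the VM:

	# sketch of the resulting keys in /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]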
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
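The lines above show the preload path in full: a stat check for /preloaded.tar.lz4 on the guest, an SSH copy of the cached image tarball when it is missing, an lz4 extraction into /var with security xattrs preserved, and finally removal of the tarball. A rough shell equivalent of the guest-side steps, assuming the same paths as in the log:

# Guest-side sketch of the preload steps above (the copy itself happens over SSH from the host cache).
TARBALL=/preloaded.tar.lz4
stat -c "%s %y" "$TARBALL" || echo "tarball missing; it would be copied from the host cache" >&2
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
sudo rm -f "$TARBALL"
sudo crictl images --output json   # confirm the preloaded images are now visible to CRI-O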
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
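At this point the combined kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has been written to /var/tmp/minikube/kubeadm.yaml.new. As a side note, recent kubeadm releases can lint such a file before any phases run; the check below is hypothetical here and not part of the logged flow:

# Hypothetical sanity check (not run by the test): validate the generated config with kubeadm itself.
sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new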
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
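The ls/openssl/ln triplets above follow OpenSSL's hashed trust-directory convention: each CA certificate is staged under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject-name hash with a .0 suffix, which is how OpenSSL finds it at verification time. The pattern for a single certificate, condensed:

# Hash-and-link pattern used above, shown for one certificate.
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL looks up CAs by <hash>.0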
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
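Each openssl -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status is the signal to regenerate it. For example:

# -checkend N exits 0 if the certificate is still valid N seconds from now, non-zero otherwise.
if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "certificate expires within 24h and would be regenerated" >&2
fi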
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
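The four grep/rm pairs above are a stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8444 (here, because none of the files exist yet) is removed so kubeadm can regenerate it. The same sweep as a loop:

# Sketch of the sweep above; endpoint and file names are the ones in the log.
ENDPOINT="https://control-plane.minikube.internal:8444"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done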
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
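Because this is a restart of an existing profile, minikube replays individual kubeadm init phases against the versioned binaries instead of running a full init: certs, kubeconfig, kubelet-start, control-plane, and etcd, as the commands above show. Condensed into a loop:

# Condensed view of the phased restart above (same binary path and config file as in the log).
BINDIR=/var/lib/minikube/binaries/v1.31.0
CFG=/var/tmp/minikube/kubeadm.yaml
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    # $phase is intentionally unquoted so "certs all" expands to two arguments
    sudo env PATH="$BINDIR:$PATH" kubeadm init phase $phase --config "$CFG"
done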
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
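The healthz wait above is a plain HTTPS poll against the apiserver: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks (rbac/bootstrap-roles, the system priority classes) are still completing, then 200 once everything is up. A curl equivalent of the same wait, for illustration:

# Illustrative equivalent of the wait above: poll /healthz until it returns HTTP 200.
until [ "$(curl -ks -o /dev/null -w '%{http_code}' https://192.168.72.69:8444/healthz)" = "200" ]; do
    sleep 0.5
done
echo "apiserver healthy"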
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
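The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced a few lines earlier. Its exact contents are not shown in the log; a typical bridge conflist of roughly that shape looks like the following (every field value here is illustrative, not taken from the test):

# Illustrative bridge CNI conflist; the real file contents are not captured in this log.
cat <<'EOF' > /tmp/1-k8s.conflist.example
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF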
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
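The retry.go lines above are libmachine polling libvirt for a DHCP lease before the embed-certs VM has an address, sleeping a randomized, growing interval between attempts. A minimal Go sketch of that wait-for-IP loop, assuming a hypothetical lookupIP helper and a 10-attempt cap (neither is minikube's actual API; the randomized, growing delay simply mirrors the intervals logged above):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the hypervisor's DHCP leases; it is a
    // hypothetical helper, not part of minikube or libmachine.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address of domain " + domain)
    }

    // waitForIP retries with a randomized, growing delay, mirroring the
    // retry.go messages in the log above.
    func waitForIP(domain string) (string, error) {
    	for attempt := 1; attempt <= 10; attempt++ {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		delay := time.Duration(attempt)*200*time.Millisecond +
    			time.Duration(rand.Int63n(int64(300*time.Millisecond)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    	}
    	return "", errors.New("machine never reported an IP")
    }

    func main() {
    	ip, err := waitForIP("embed-certs-823470")
    	fmt.Println(ip, err)
    }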
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
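The cache_images sequence above follows a check-then-transfer pattern: each required image is inspected on the node with podman, anything missing or carrying the wrong ID is untagged with crictl rmi, and minikube then falls back to loading the image from its local cache directory; here that final step fails because the cached etcd tarball was never downloaded to the Jenkins host. A rough Go sketch of the pattern under stated assumptions (runOnNode and cachePath are illustrative stand-ins rather than minikube functions, and the cache layout is inferred from the paths in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // runOnNode stands in for minikube's ssh_runner; here it just runs the
    // command locally and returns trimmed stdout.
    func runOnNode(args ...string) (string, error) {
    	out, err := exec.Command(args[0], args[1:]...).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // cachePath mirrors the layout seen in the log, e.g.
    // $MINIKUBE_HOME/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
    func cachePath(image string) string {
    	return filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "images", "amd64",
    		strings.ReplaceAll(image, ":", "_"))
    }

    // ensureImage keeps an image only if it is already present with the
    // expected ID; otherwise it untags it and falls back to the local cache.
    func ensureImage(image, wantID string) error {
    	id, err := runOnNode("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
    	if err == nil && id == wantID {
    		return nil
    	}
    	_, _ = runOnNode("sudo", "/usr/bin/crictl", "rmi", image)
    	if _, statErr := os.Stat(cachePath(image)); statErr != nil {
    		return fmt.Errorf("unable to load cached image %s: %w", image, statErr)
    	}
    	// ...copy cachePath(image) to the node and load it into CRI-O...
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/etcd:3.4.13-0",
    		"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934")
    	if err != nil {
    		fmt.Println("X Unable to load cached images:", err)
    	}
    }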
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
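Each openssl invocation above uses -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so every on-node certificate is being verified to remain valid for at least another 24 hours before the existing control-plane material is reused. A minimal equivalent of that check in Go, assuming one of the certificate paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Path taken from the log; any of the checked certificates works the same way.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse:", err)
    		return
    	}
    	// Same question as `openssl x509 -checkend 86400`: will the cert still
    	// be valid 24 hours from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; regeneration needed")
    	} else {
    		fmt.Println("certificate still valid; safe to reuse")
    	}
    }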
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
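The grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the kubeadm init phases below can regenerate it. A compact Go sketch of that loop (runOnNode is a hypothetical stand-in for minikube's remote runner, not its actual API):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runOnNode stands in for minikube's remote runner; locally it just
    // executes the command and reports whether it exited zero.
    func runOnNode(args ...string) error {
    	return exec.Command(args[0], args[1:]...).Run()
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, conf := range confs {
    		path := "/etc/kubernetes/" + conf
    		// Keep the file only if it already targets the expected endpoint.
    		if err := runOnNode("sudo", "grep", endpoint, path); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			_ = runOnNode("sudo", "rm", "-f", path)
    		}
    	}
    }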
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
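The burst of pgrep commands above is the api_server.go wait loop: after the kubeadm init phases, minikube polls roughly every 500ms for a kube-apiserver process matching the minikube manifests before it moves on. A minimal Go sketch of that polling loop, assuming a hypothetical apiserverRunning helper in place of the remote pgrep call:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning stands in for the remote
    // `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero
    // until a matching process exists.
    func apiserverRunning() bool {
    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return errors.New("kube-apiserver process never appeared")
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }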
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
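	(A minimal sketch, not part of the recorded run: the script above pins 127.0.1.1 to the new hostname so local name resolution keeps working after the rename. Re-checking the result on the guest, assuming the hostname shown in the log:)
		hostname                                  # expected: no-preload-441627
		grep -n 'no-preload-441627' /etc/hosts    # shows the 127.0.1.1 entry added or rewritten above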
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
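	(A minimal sketch, not from the run, for confirming the SANs baked into the server certificate generated above; it assumes openssl is available on the host:)
		openssl x509 -in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -noout -text \
		  | grep -A1 'Subject Alternative Name'
		# expected to list 127.0.0.1, 192.168.50.161, localhost, minikube, no-preload-441627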
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
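	(The sysconfig drop-in written above marks the service CIDR 10.96.0.0/12 as an insecure registry range for CRI-O. A hedged sketch for verifying it on the guest; it assumes, which this log does not show, that the ISO's crio unit sources /etc/sysconfig/crio.minikube:)
		cat /etc/sysconfig/crio.minikube
		systemctl cat crio | grep -i environment   # assumption: the unit references the sysconfig file above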
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
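	(The delta reported above is simply guest clock minus host-observed remote time: 1724073655.727816487 s - 1724073655.629285878 s = 0.098530609 s = 98.530609 ms, hence "within tolerance" and the run proceeds without adjusting the guest clock.)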
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
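	(The find one-liner above moves any bridge/podman CNI configs aside so they do not conflict with the CNI that minikube configures later in this run. A more readable equivalent, as a sketch only:)
		for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
		  [ -e "$f" ] || continue
		  case "$f" in *.mk_disabled) continue ;; esac
		  sudo mv "$f" "$f.mk_disabled"          # same .mk_disabled suffix used by the command above
		done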
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
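	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver and sysctl settings minikube needs; a sketch of the net effect plus a one-line check, illustrative rather than taken from the run:)
		# pause_image     = "registry.k8s.io/pause:3.10"
		# cgroup_manager  = "cgroupfs"
		# conmon_cgroup   = "pod"                       (re-added right after cgroup_manager)
		# default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0", ...]
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf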
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
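	(The failed sysctl above is expected: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, which the modprobe then does before IP forwarding is switched on. A sketch of re-checking both prerequisites on the guest:)
		lsmod | grep br_netfilter                     # module loaded by the modprobe above
		sysctl net.bridge.bridge-nf-call-iptables     # now resolves instead of "cannot stat"
		cat /proc/sys/net/ipv4/ip_forward             # expected to print 1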
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
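	(Because this is the no-preload profile, all eight images were loaded into the runtime's store from the host cache via podman rather than from a preload tarball. A quick sketch, not from the run, for confirming the runtime now sees them:)
		sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'
		sudo podman images --format '{{.Repository}}:{{.Tag}}'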
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
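	(The unit override above is what later lands on the node as a systemd drop-in, via the 317-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down. A sketch for inspecting it on the guest, assuming that path:)
		cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		systemctl cat kubelet            # unit plus drop-ins actually in effect after daemon-reload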
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
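The block above is minikube's log-gathering loop: for each expected control-plane component it runs `crictl ps -a --quiet --name=<component>`, finds no container at all, and then falls back to host-level sources (kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, container status). A minimal sketch of running the same probes by hand on the node, using only commands that appear in the log (the loop wrapper itself is illustrative):

    # Probe for each control-plane container the way logs.go does
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # empty output means no container was found
    done
    # Fallback sources gathered when nothing is running
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig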
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
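The interleaved pod_ready lines are from separate parallel test processes (2153523, 2151869, 2152365) polling their metrics-server pods, none of which reach Ready during this window. A hand-run equivalent of that poll, using one of the pod names from the log (the jsonpath expression is illustrative and assumes kubectl is pointed at the matching cluster):

    kubectl -n kube-system get pod metrics-server-6867b74b74-8dwpf \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready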
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
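Every `describe nodes` attempt fails the same way: the connection to localhost:8443 is refused, which is consistent with the probes above finding no kube-apiserver container, so nothing is serving the API port. Two quick checks from the node (the pgrep pattern is the one the log itself uses; the ss check is an assumed extra step, not from the test):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # non-zero exit when no apiserver process exists
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"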
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
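The "container status" step is a double fallback: use whichever crictl `which` resolves, fall back to the bare name if it is not on PATH, and finally fall back to `docker ps -a` if crictl fails outright. The same command from the log, unrolled for readability:

    CRICTL="$(which crictl || echo crictl)"      # resolved path, or the bare name as a last guess
    sudo "$CRICTL" ps -a || sudo docker ps -a    # Docker listing is the final fallback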
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
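The cycle above repeats for several minutes while the control plane stays down: minikube polls crictl for each expected component, finds no containers, and falls back to gathering kubelet, dmesg, CRI-O, and container-status logs before the describe-nodes call fails with "connection refused". A minimal sketch of the equivalent manual checks on the node, using only commands already shown in this log (binary path and kubectl version are taken from the log and may differ on other runs):

	# No control-plane containers exist yet; these return empty IDs in the log above.
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd

	# The same fallback logs minikube gathers while the apiserver is unreachable.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	# This is the call that keeps failing because nothing is listening on localhost:8443.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig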
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
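Interleaved with the apiserver wait above, three other profiles keep reporting that their metrics-server pods are not Ready. A hedged sketch of how one of these pods could be inspected by hand (the pod name is copied from the log; the jsonpath expression is an illustrative assumption, not part of the test):

	# Print the pod's condition types and statuses to see which readiness condition is False.
	kubectl -n kube-system get pod metrics-server-6867b74b74-8dwpf \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'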
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
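The three addons above are applied by copying their manifests onto the node and running the bundled kubectl, as logged. A minimal verification sketch against the embed-certs-823470 context (hedged: the APIService name, Deployment name, and k8s-app label are assumed from the stock minikube metrics-server manifests, not taken from this log):

    # verify the metrics-server addon objects created above
    kubectl --context embed-certs-823470 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-823470 -n kube-system get deploy,svc metrics-server
    kubectl --context embed-certs-823470 -n kube-system get pods -l k8s-app=metrics-server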
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
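When the kubelet health check above keeps failing with "connection refused", the usual follow-up on the node is to probe the same endpoint kubeadm uses and inspect the kubelet unit directly. A generic troubleshooting sketch (standard systemd/journalctl commands, not taken from this run):

    curl -sSL http://localhost:10248/healthz        # same probe kubeadm performs
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager | tail -n 50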
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
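With the profile started, kubectl is pointed at the new cluster as reported above. A quick hedged spot-check of the resulting context and control-plane workloads from the host:

    kubectl config current-context          # expected: embed-certs-823470
    kubectl get nodes -o wide
    kubectl -n kube-system get pods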
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
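The stale-config cleanup above boils down to: for each kubeconfig under /etc/kubernetes, keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it. A compact sketch of the same logic (paths and endpoint taken from the log lines above):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done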
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
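The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the standard kubeadm recipe, using the certificate directory /var/lib/minikube/certs reported earlier in this run (a sketch; requires openssl on the node):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'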
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
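Once the kubeconfig update above is written, the node labels applied by the earlier kubectl label command and the default service account can be spot-checked from the host. A hedged example against the no-preload-441627 profile (context name assumed to match the profile, as minikube normally writes it):

    kubectl --context no-preload-441627 get node no-preload-441627 \
      -L minikube.k8s.io/version -L minikube.k8s.io/primary
    kubectl --context no-preload-441627 -n default get serviceaccount default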
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.212344156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075020212324751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b51e560-850d-4ad4-a047-2f0671c51125 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.212895051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb3acdc2-bbce-4ebb-9bc2-47b18a7340fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.212948947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb3acdc2-bbce-4ebb-9bc2-47b18a7340fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.213136988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb3acdc2-bbce-4ebb-9bc2-47b18a7340fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.251148361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=163340ef-93cd-4103-a3b0-8145c772b916 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.251240221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=163340ef-93cd-4103-a3b0-8145c772b916 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.252301348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a73e59bb-55af-4568-b95f-4424fabfe1b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.252907842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075020252882515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a73e59bb-55af-4568-b95f-4424fabfe1b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.254293855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fed2fc1-a241-4542-b83d-16093e1981d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.254349362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fed2fc1-a241-4542-b83d-16093e1981d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.254558884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fed2fc1-a241-4542-b83d-16093e1981d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.290528050Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3b697d8-efee-46bb-90a6-cad4b377f685 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.290628344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3b697d8-efee-46bb-90a6-cad4b377f685 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.291610420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03da6a0b-3406-4bfc-acc7-2e400c212864 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.292263460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075020292240435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03da6a0b-3406-4bfc-acc7-2e400c212864 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.292799573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9487f886-ca5f-45dc-af4f-8ad67fa002d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.292849276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9487f886-ca5f-45dc-af4f-8ad67fa002d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.293044178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9487f886-ca5f-45dc-af4f-8ad67fa002d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.326504683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc3aefc8-bf3e-431b-9b90-25e7014a4bca name=/runtime.v1.RuntimeService/Version
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.326579472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc3aefc8-bf3e-431b-9b90-25e7014a4bca name=/runtime.v1.RuntimeService/Version
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.327541238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=618fd90b-19e8-467e-b549-60dfd500cf82 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.328003315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075020327981401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=618fd90b-19e8-467e-b549-60dfd500cf82 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.328600708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=641b71fa-e9e3-4aad-abfe-32d4bd8200c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.328648983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=641b71fa-e9e3-4aad-abfe-32d4bd8200c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:43:40 embed-certs-823470 crio[728]: time="2024-08-19 13:43:40.328922435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e,PodSandboxId:39a67d1ffda30ecf0b26779c9b79172e7e17c42d10ce5e9d78fa1b9d31610db8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938447971748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95hnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3600e8be-6152-427e-b049-52ff758d6b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b,PodSandboxId:e24249efddc1dbb0167bf3189e04aada650ffc25ab11b95e6cb7e98a15f80746,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073938392591354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nwzh6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 514f2cfd-88a1-4a1f-be53-4c21a33e9f3b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d,PodSandboxId:5604ce01b5e698f635e75403302a0446327cf787e862172e920175ec7f1b5cac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1724073937863833017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100,PodSandboxId:0897f86c472a4bf5c6035de3bb2ba7193c8fd026f3580da98f7220a4b10af1d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724073936334375499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cn9sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d8735d-9726-4b29-9e25-ff427831e523,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5,PodSandboxId:3dac58c1da5f417f9439f94271c16e1179831244f6bf9e8e81b063e2125aad9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073925666149723,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313639c8d25b086f943d9287cedb068f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129,PodSandboxId:e6eba311f6f86419b386650a1c324897dc03af1d2c55874267633967e55e525e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073925612013236,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e1b583833ac0f159fedcfa268e99a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b,PodSandboxId:1fb5c394291b63d991029604e10f2b4c9875a763474ff727d72b515c52bd782c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073925609198124,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9892b9079966050e46e07583998bab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b,PodSandboxId:efb591bb60aa98ccf57f182759a360cdf56b2c6a8448451d45e6f00a0a2c25b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073925573871823,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3,PodSandboxId:807d476f2ae267751c2eaad333e4c2e8339f86022aa607d5a728856b76cd7616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073643641761945,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-823470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c885f0b2cbb96f0f93abd846852daa6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=641b71fa-e9e3-4aad-abfe-32d4bd8200c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce37bfb8cdce0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   39a67d1ffda30       coredns-6f6b679f8f-95hnp
	3798b17b161f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   e24249efddc1d       coredns-6f6b679f8f-nwzh6
	1b117aabf2cf0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   5604ce01b5e69       storage-provisioner
	649e9a31e05f1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   18 minutes ago      Running             kube-proxy                0                   0897f86c472a4       kube-proxy-cn9sl
	807d943675d96       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   18 minutes ago      Running             kube-scheduler            2                   3dac58c1da5f4       kube-scheduler-embed-certs-823470
	6fa09432aecb2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   18 minutes ago      Running             kube-controller-manager   2                   e6eba311f6f86       kube-controller-manager-embed-certs-823470
	864097e3781d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 minutes ago      Running             etcd                      2                   1fb5c394291b6       etcd-embed-certs-823470
	73fbc22569fbb       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   18 minutes ago      Running             kube-apiserver            2                   efb591bb60aa9       kube-apiserver-embed-certs-823470
	19477ce7f6a1b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   22 minutes ago      Exited              kube-apiserver            1                   807d476f2ae26       kube-apiserver-embed-certs-823470
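
Editor's note: the container status table above is the result of the same /runtime.v1.RuntimeService/ListContainers RPC that the CRI-O debug log records ("No filters were applied, returning full container list"). Below is a minimal Go sketch of that call, assuming the k8s.io/cri-api and google.golang.org/grpc modules; the socket path is the one named in the kubeadm hint and CRI-O log, and the output formatting is illustrative.

	// listcontainers.go: illustrative sketch of the ListContainers call shown
	// in the CRI-O debug log above. Not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Connect to the CRI-O socket referenced earlier in this log.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter returns the full container list, matching the
		// "No filters were applied" debug message above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13s %-28s attempt=%d state=%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}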
	
	
	==> coredns [3798b17b161f5b1af75c78352d94e35fbbbdceefffe887465e356851caab671b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ce37bfb8cdce0afffb49c149eb226f5c4766eeeec26f1da39937debefc27e17e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-823470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-823470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=embed-certs-823470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-823470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:43:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:40:59 +0000   Mon, 19 Aug 2024 13:25:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:40:59 +0000   Mon, 19 Aug 2024 13:25:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:40:59 +0000   Mon, 19 Aug 2024 13:25:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:40:59 +0000   Mon, 19 Aug 2024 13:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.50
	  Hostname:    embed-certs-823470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74fdc2cbc3ab4025aaf85ade139f8d77
	  System UUID:                74fdc2cb-c3ab-4025-aaf8-5ade139f8d77
	  Boot ID:                    540c561b-6bfd-49ea-8248-6fdf75e50995
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-95hnp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-6f6b679f8f-nwzh6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-embed-certs-823470                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-embed-certs-823470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-embed-certs-823470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-cn9sl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-embed-certs-823470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-6867b74b74-hkt29               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node embed-certs-823470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node embed-certs-823470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node embed-certs-823470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node embed-certs-823470 event: Registered Node embed-certs-823470 in Controller
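
Editor's note: the Conditions block in the node description above (MemoryPressure, DiskPressure, PIDPressure, Ready) can also be read programmatically. Below is a minimal sketch using k8s.io/client-go (assumed available); the node name comes from this report and the kubeconfig path is the default home location, both illustrative.

	// nodeconditions.go: illustrative sketch that prints the same
	// Type/Status/Reason columns shown in the describe-nodes output above.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "embed-certs-823470", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}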
	
	
	==> dmesg <==
	[  +0.048452] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.082771] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.594618] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600438] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.405228] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.056785] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056539] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.180386] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.135179] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.284597] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.397684] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.060905] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.032226] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.633974] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.809031] kauditd_printk_skb: 52 callbacks suppressed
	[Aug19 13:21] kauditd_printk_skb: 33 callbacks suppressed
	[Aug19 13:25] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.193737] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +4.475724] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.578120] systemd-fstab-generator[2895]: Ignoring "noauto" option for root device
	[  +5.671622] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.280785] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +9.255645] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [864097e3781d0c4a5b1508c65507fa7c3d87fa7293966bcffd9047cacb6c8e7b] <==
	{"level":"info","ts":"2024-08-19T13:25:26.703856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T13:25:26.703896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 received MsgPreVoteResp from eeb5a3f186171c82 at term 1"}
	{"level":"info","ts":"2024-08-19T13:25:26.703926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.703951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 received MsgVoteResp from eeb5a3f186171c82 at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.703977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eeb5a3f186171c82 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.704002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eeb5a3f186171c82 elected leader eeb5a3f186171c82 at term 2"}
	{"level":"info","ts":"2024-08-19T13:25:26.707821Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.709968Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"eeb5a3f186171c82","local-member-attributes":"{Name:embed-certs-823470 ClientURLs:[https://192.168.61.50:2379]}","request-path":"/0/members/eeb5a3f186171c82/attributes","cluster-id":"4c0bf4743f00d511","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:25:26.710132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:25:26.710437Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4c0bf4743f00d511","local-member-id":"eeb5a3f186171c82","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.710527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.712712Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:25:26.712768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:25:26.712793Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:25:26.712836Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:25:26.713800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:25:26.718336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:25:26.723260Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:25:26.724028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.50:2379"}
	{"level":"info","ts":"2024-08-19T13:35:26.762926Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-19T13:35:26.771243Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":686,"took":"7.573113ms","hash":2537882096,"current-db-size-bytes":2211840,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2211840,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-19T13:35:26.771350Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2537882096,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T13:40:26.771804Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":929}
	{"level":"info","ts":"2024-08-19T13:40:26.775721Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":929,"took":"3.115719ms","hash":1065318077,"current-db-size-bytes":2211840,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T13:40:26.775805Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1065318077,"revision":929,"compact-revision":686}
	
	
	==> kernel <==
	 13:43:40 up 23 min,  0 users,  load average: 0.19, 0.26, 0.18
	Linux embed-certs-823470 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19477ce7f6a1b08763711c8683d785c97a54166454d06388f00ef69316ef70d3] <==
	W0819 13:25:19.946455       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:19.957968       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:19.983522       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:19.987881       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.012504       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.038074       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.110390       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.125244       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.145296       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:20.254868       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.400133       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.491577       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.753855       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.785066       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.908891       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.924405       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.952051       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:22.987101       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.044984       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.103064       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.107570       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.171573       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.172961       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.199947       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:23.225501       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [73fbc22569fbb35ce9edabdaebe2e2635bd4621399a05e98d1522c0fb6ef442b] <==
	W0819 13:40:29.298045       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:40:29.298170       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:40:29.299188       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:40:29.299231       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:41:29.300034       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:41:29.300126       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 13:41:29.300177       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:41:29.300190       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 13:41:29.301323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:41:29.301361       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:43:29.302073       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 13:43:29.302073       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:43:29.302459       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0819 13:43:29.302561       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:43:29.303745       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:43:29.303865       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6fa09432aecb2a2b37ba05dad4d5808366acafac98896d466c5aafe664990129] <==
	E0819 13:38:35.416037       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:38:35.879245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:39:05.423840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:39:05.890030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:39:35.430929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:39:35.898181       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:40:05.437489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:40:05.904808       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:40:35.444856       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:40:35.912568       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:40:59.832213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-823470"
	E0819 13:41:05.450486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:41:05.920943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:41:35.456820       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:41:35.928220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:41:59.876067       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="275.032µs"
	E0819 13:42:05.463334       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:42:05.936470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:42:12.870622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="151.745µs"
	E0819 13:42:35.469951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:42:35.944078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:43:05.476098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:43:05.951865       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:43:35.482362       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:43:35.959351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [649e9a31e05f140e89652fac3586255fe09d21be2cd1a7692a37b54947537100] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:25:36.625124       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:25:36.636535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.50"]
	E0819 13:25:36.636616       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:25:36.706916       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:25:36.706978       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:25:36.707011       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:25:36.712316       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:25:36.712555       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:25:36.712584       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:25:36.717426       1 config.go:197] "Starting service config controller"
	I0819 13:25:36.717746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:25:36.717848       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:25:36.717870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:25:36.718360       1 config.go:326] "Starting node config controller"
	I0819 13:25:36.719743       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:25:36.818786       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:25:36.818775       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:25:36.820233       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [807d943675d96e3c512c003d16f2f826569a36466eb14842c6d3494849313fe5] <==
	W0819 13:25:29.175987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:25:29.176056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.176154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 13:25:29.176196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.238457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 13:25:29.238511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.240233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:25:29.240809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.256893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:25:29.256939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.295443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 13:25:29.295761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.307879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 13:25:29.307938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.333980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:25:29.334153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.434387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:25:29.434439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.532195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:25:29.532250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:25:29.554337       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 13:25:29.554392       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 13:25:29.598174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:25:29.598225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 13:25:31.427272       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:42:30 embed-certs-823470 kubelet[2902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:42:31 embed-certs-823470 kubelet[2902]: E0819 13:42:31.167012    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074951166802767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:42:31 embed-certs-823470 kubelet[2902]: E0819 13:42:31.167062    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074951166802767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:42:38 embed-certs-823470 kubelet[2902]: E0819 13:42:38.857711    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:42:41 embed-certs-823470 kubelet[2902]: E0819 13:42:41.169378    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074961168959436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:42:41 embed-certs-823470 kubelet[2902]: E0819 13:42:41.169422    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074961168959436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:42:51 embed-certs-823470 kubelet[2902]: E0819 13:42:51.171035    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074971170427117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:42:51 embed-certs-823470 kubelet[2902]: E0819 13:42:51.171318    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074971170427117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:42:53 embed-certs-823470 kubelet[2902]: E0819 13:42:53.857417    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:43:01 embed-certs-823470 kubelet[2902]: E0819 13:43:01.172555    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074981172219919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:01 embed-certs-823470 kubelet[2902]: E0819 13:43:01.172602    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074981172219919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:06 embed-certs-823470 kubelet[2902]: E0819 13:43:06.858624    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:43:11 embed-certs-823470 kubelet[2902]: E0819 13:43:11.174107    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074991173809094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:11 embed-certs-823470 kubelet[2902]: E0819 13:43:11.174422    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074991173809094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:19 embed-certs-823470 kubelet[2902]: E0819 13:43:19.857617    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	Aug 19 13:43:21 embed-certs-823470 kubelet[2902]: E0819 13:43:21.176222    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075001175860020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:21 embed-certs-823470 kubelet[2902]: E0819 13:43:21.176309    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075001175860020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:30 embed-certs-823470 kubelet[2902]: E0819 13:43:30.876030    2902 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 13:43:30 embed-certs-823470 kubelet[2902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 13:43:30 embed-certs-823470 kubelet[2902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 13:43:30 embed-certs-823470 kubelet[2902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 13:43:30 embed-certs-823470 kubelet[2902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:43:31 embed-certs-823470 kubelet[2902]: E0819 13:43:31.177515    2902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075011177129859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:31 embed-certs-823470 kubelet[2902]: E0819 13:43:31.177543    2902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724075011177129859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:43:34 embed-certs-823470 kubelet[2902]: E0819 13:43:34.859497    2902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hkt29" podUID="8280db9d-3e31-4689-a2bc-fdcc8ca7f129"
	
	
	==> storage-provisioner [1b117aabf2cf090344b6d163058bf9a4f7af5a7011f40e3b2c84dfc72c1ebc1d] <==
	I0819 13:25:38.070064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:25:38.124239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:25:38.125420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:25:38.140262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:25:38.140439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-823470_96ffcd4e-b568-448b-b18e-68aac759b662!
	I0819 13:25:38.141472       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0828c11d-0ca3-4e25-9d27-447d187659cf", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-823470_96ffcd4e-b568-448b-b18e-68aac759b662 became leader
	I0819 13:25:38.240850       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-823470_96ffcd4e-b568-448b-b18e-68aac759b662!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-823470 -n embed-certs-823470
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-823470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hkt29
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-823470 describe pod metrics-server-6867b74b74-hkt29
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-823470 describe pod metrics-server-6867b74b74-hkt29: exit status 1 (64.801005ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hkt29" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-823470 describe pod metrics-server-6867b74b74-hkt29: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (530.24s)
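Note: the metrics-server ImagePullBackOff seen in the kubelet log above is expected for this profile, since the Audit log later in this report shows the addon being enabled with --registries=MetricsServer=fake.domain. A minimal manual check of the image reference that the AddonExistsAfterStop assertion depends on, assuming the embed-certs-823470 cluster is still reachable (the jsonpath query is illustrative, not part of the test):
	kubectl --context embed-certs-823470 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context embed-certs-823470 -n kube-system get pods -l k8s-app=metrics-server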

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (303.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441627 -n no-preload-441627
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 13:40:29.1013478 +0000 UTC m=+6229.736138092
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-441627 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-441627 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.99µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-441627 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
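As a rough manual equivalent of the 9m wait at start_stop_delete_test.go:287, assuming the no-preload-441627 context is still usable, the dashboard pods and the scraper image could be re-checked with commands like the following (label selector and deployment name are taken from the log above; the exact queries are illustrative):
	kubectl --context no-preload-441627 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=60s
	kubectl --context no-preload-441627 -n kubernetes-dashboard get deployment dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'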
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-441627 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-441627 logs -n 25: (2.138252659s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:40 UTC | 19 Aug 24 13:40 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
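The retry.go entries above show libmachine polling for the VM's DHCP-assigned address with a growing, randomized backoff until the machine comes up. Below is a minimal Go sketch of that wait-for-IP pattern, assuming a hypothetical lookupIP helper in place of the real libvirt DHCP-lease query; the delays and growth factor are illustrative, not minikube's actual values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease lookup; here it always fails,
// the way the log's "unable to find current IP address" lines do at first.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a randomized, growing delay until it
// succeeds or the overall timeout expires.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		base += base / 2 // delays grow over time, roughly as in the log
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println("error:", err)
	}
}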
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
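The DBG lines above show the provisioner checking SSH reachability by running "exit 0" through the external ssh client with a fixed set of options. A hedged Go sketch of the same kind of probe via os/exec follows; the key path is a placeholder and the option list is a representative subset of the flags visible in the log, not minikube's exact invocation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the machine with "exit 0", mirroring the WaitForSSH step above.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/machine/id_rsa", // placeholder private key path
		"-p", "22",
		"docker@192.168.72.69", // host taken from the log; adjust as needed
		"exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is available")
}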
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
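The hostname step above pushes a small shell snippet over SSH that pins the machine name in /etc/hosts. The sketch below only templates that same snippet for an arbitrary hostname and prints it; buildHostsScript is a hypothetical helper for illustration, not part of minikube's provisioner.

package main

import "fmt"

// buildHostsScript reproduces the /etc/hosts snippet from the log for a given
// hostname: add a 127.0.1.1 entry if none exists, or rewrite the existing one.
func buildHostsScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(buildHostsScript("default-k8s-diff-port-038572"))
}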
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
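provision.go reports generating a server certificate whose SAN list mixes IP addresses and DNS names. As a rough illustration only (minikube signs with the CA key shown in the paths above, whereas this sketch self-signs), an equivalent SAN set can be produced with Go's crypto/x509:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-038572"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		DNSNames:    []string{"default-k8s-diff-port-038572", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.69")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}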
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
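	Note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing CRI-O at the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. A minimal way to confirm the result by hand (a sketch only; the exact layout of the drop-in may differ):
	    # Show the keys the edits above are expected to have set in the CRI-O drop-in.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # Expected (approximately):
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",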
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
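	Note: the /etc/hosts rewrite above is an idempotent strip-then-append: any existing host.minikube.internal line is dropped, the desired mapping is re-added, and the temporary file is copied back into place. The same pattern in isolation (values copied from the log line above; a sketch, not the tool's own code):
	    # Pin a hostname to an IP in /etc/hosts without duplicating entries.
	    ip=192.168.72.1
	    name=host.minikube.internal
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$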
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
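	Note: the preload path above copies the cached preloaded-images tarball into the guest, unpacks it into /var with lz4 while preserving the security.capability xattrs, deletes the tarball, and then re-lists images with crictl to confirm they are visible to CRI-O. The extraction step on its own (flags exactly as in the log; an illustrative sketch):
	    # Unpack a minikube preload tarball into /var, keeping capability xattrs.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4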
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
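	Note: the three certificate installs above all follow the OpenSSL hash-link convention: each PEM is linked under /etc/ssl/certs and a <subject-hash>.0 symlink is added so OpenSSL's hash-based lookup can find it. One of them spelled out (paths and the 3ec20f2e hash are taken from this run; a sketch, not the tool's own code):
	    # Install a CA file and create the subject-hash symlink OpenSSL expects.
	    cert=/usr/share/ca-certificates/20941932.pem
	    sudo ln -fs "$cert" /etc/ssl/certs/20941932.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")      # prints 3ec20f2e for this cert
	    sudo ln -fs /etc/ssl/certs/20941932.pem "/etc/ssl/certs/${hash}.0"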
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
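	Note: rather than running a full kubeadm init, the restart above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. The same sequence condensed into one loop (binary and config paths as in the log; an illustrative sketch, not minikube's own code):
	    # Re-run the kubeadm init phases used when restarting an existing control plane.
	    BIN=/var/lib/minikube/binaries/v1.31.0
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	    done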
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
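	Note: the wait above polls https://192.168.72.69:8444/healthz until it returns 200 "ok"; the early 403s (anonymous user) and 500s (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending) are expected transient states during startup. A hand-rolled equivalent with curl (illustrative only; minikube performs this check with its own Go client):
	    # Poll the apiserver health endpoint until it reports ok, for up to ~30s.
	    for i in $(seq 1 30); do
	        body=$(curl -sk https://192.168.72.69:8444/healthz) && [ "$body" = "ok" ] && break
	        sleep 1
	    done
	    echo "healthz: ${body:-no response}"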
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
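Taken together, the three sed edits above leave the CRI-O drop-in pointing at minikube's pause image and the cgroupfs driver. A minimal sketch of the resulting keys in /etc/crio/crio.conf.d/02-crio.conf (only the edited keys are shown; the surrounding section headers are assumed to come from the stock file):

	# /etc/crio/crio.conf.d/02-crio.conf -- edited keys only (sketch, not test output)
	pause_image = "registry.k8s.io/pause:3.2"   # rewritten by the first sed
	cgroup_manager = "cgroupfs"                 # rewritten by the second sed
	conmon_cgroup = "pod"                       # old line deleted, then re-added after cgroup_manager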
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
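The sysctl probe above exits with status 255 only because the br_netfilter module is not loaded yet (the /proc key does not exist until it is), which is why the very next step is a modprobe followed by enabling IPv4 forwarding. A rough shell equivalent of that check-then-load sequence:

	# If the bridge-netfilter sysctl key is missing, load the module that provides it.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	fi
	# Pod traffic also needs IPv4 forwarding enabled on the node.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"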
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
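The one-liner above is the usual idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the fresh one, and stage the result in a temp file, since the redirection runs as the unprivileged SSH user while only the final cp runs under sudo. The same idiom is reused later for control-plane.minikube.internal. Reduced to its shape:

	# Replace-or-add a single /etc/hosts entry without duplicating it.
	ENTRY="192.168.39.1	host.minikube.internal"
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts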
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
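In the kubelet drop-in rendered above (written a few lines later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), the bare ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before substituting minikube's full command line. The shape of that drop-in, as a sketch:

	# 10-kubeadm.conf -- sketch of the drop-in structure shown in the log
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238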
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
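The b5213941.0, 51391683.0 and 3ec20f2e.0 targets above are OpenSSL subject-hash aliases: each CA certificate is linked into /etc/ssl/certs under its own name, and a second symlink named after the hash printed by openssl x509 -hash lets OpenSSL-based clients look the CA up by subject. The pattern for one certificate, as a sketch:

	# Install a CA cert where OpenSSL can find it by subject hash.
	SRC=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$SRC" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$SRC")              # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # .0 = first cert with this hash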
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
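Each -checkend 86400 probe above succeeds only if the certificate remains valid for at least another 86400 seconds (24 hours); a failing check is presumably what would prompt regeneration of that certificate, though none fails in this run. For example:

	# Exit 0 = valid for at least another 24h; exit 1 = expires within that window.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for 24h" \
	  || echo "expires within 24h"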
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
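
The four grep/rm pairs above are the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf file is checked for the expected control-plane endpoint and removed when it does not reference it. A compact equivalent of that pattern (purely illustrative; minikube issues the individual commands shown in the log rather than a loop):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint.
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
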
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
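
At this point the restart path has re-run the individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml and begins polling for a kube-apiserver process. A minimal sketch of that sequence, using the v1.20.0 binary path from the log (the loop structure and the sleep interval are illustrative assumptions):

    # Re-run the kubeadm phases, then wait for the API server process to appear.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # the log shows roughly half a second between pgrep attempts
    done
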
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
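
Here the kvm2 driver has restarted the stopped no-preload-441627 domain and is polling the libvirt DHCP leases for an IP, backing off between attempts. Outside of the driver (which uses the libvirt Go bindings rather than the CLI) the same wait can be approximated with virsh; this is only an illustration of the check being retried:

    virsh --connect qemu:///system start no-preload-441627
    # Poll the DHCP lease table until the domain reports an IPv4 address.
    until virsh --connect qemu:///system domifaddr no-preload-441627 --source lease | grep -q ipv4; do
      sleep 2   # the driver uses an increasing backoff rather than a fixed sleep
    done
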
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
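
The block above is the CRI-O runtime preparation: write /etc/crictl.yaml, point CRI-O at the pause image, force the cgroupfs cgroup manager, load br_netfilter, enable IPv4 forwarding, and restart the service. Condensed into the essential commands (taken from the log; the sysctl-related sed edits are omitted here):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
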
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
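
Because crictl initially reports none of the expected images, minikube copies the ~389 MB preload tarball into the VM over its SSH runner and unpacks it into /var with lz4, preserving extended attributes. A rough equivalent with plain scp/ssh (file name and host are taken from the log; using the CLI tools and the /tmp staging path is an illustrative substitution):

    scp preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.61.50:/tmp/preloaded.tar.lz4
    ssh docker@192.168.61.50 \
      'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4 && sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'
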
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
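	(The sed commands above all edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the daemon restart. A minimal, hand-runnable way to check the values those commands are meant to set, using only paths and keys that appear in this log; illustrative sketch only, the file may contain additional settings:)

		# Illustrative check of the drop-in values configured above
		grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
		# Expected, approximately:
		#   pause_image = "registry.k8s.io/pause:3.10"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",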
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
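	(The pod_ready lines above poll the Ready condition of metrics-server-6867b74b74-8dwpf, which is still reporting "False" at this point. A hand-run equivalent of that check, using the profile and pod names from this log; hypothetical invocation that assumes the kubeconfig context created for the profile:)

		kubectl --context embed-certs-823470 -n kube-system \
		  get pod metrics-server-6867b74b74-8dwpf \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
		# Prints "True" once the pod is Ready; the log above keeps observing "False"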
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
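Each describe-nodes attempt fails with "connection refused" on localhost:8443, which is consistent with the earlier lookups finding no running kube-apiserver container. A quick way to confirm from the node that the apiserver port is closed is a plain TCP dial; a minimal sketch (the address comes from the error text above, the two-second timeout is an assumption):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the same endpoint kubectl was refused on.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
    	if err != nil {
    		// Expected while no kube-apiserver container is running.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }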
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
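The interleaved pod_ready.go lines come from the other test clusters polling their metrics-server pods for the Ready condition. A rough client-go equivalent of that readiness loop, assuming a reachable kubeconfig at the default ~/.kube/config location and reusing one pod name from the log (both are assumptions for illustration, not part of the test harness):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True, the same
    // condition the pod_ready.go lines above are waiting on.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-hmghm", metav1.GetOptions{})
    		if err != nil {
    			panic(err)
    		}
    		if isPodReady(pod) {
    			fmt.Println("metrics-server is Ready")
    			return
    		}
    		fmt.Println("not ready yet, retrying")
    		time.Sleep(2 * time.Second)
    	}
    }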
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
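The "describe nodes" step fails for the same underlying reason as the empty container lookups: minikube invokes the bundled v1.20.0 kubectl against the node's kubeconfig, which points at localhost:8443, but no kube-apiserver container is running, so the TCP connection is refused. The same symptom can be reproduced with a plain dial against that port; a small sketch, assuming it is run on the node itself (the address comes from the error text above):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer attempts a TCP connection to the endpoint kubectl is
// using. A "connection refused" error here corresponds to the
// "The connection to the server localhost:8443 was refused" lines above.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServer("localhost:8443"); err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	fmt.Println("apiserver port is accepting connections")
}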
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
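The interleaved pod_ready.go lines appear to come from other test profiles running in parallel (distinguishable by the process IDs 2153523, 2151869 and 2152365); each one polls its metrics-server pod until the pod reports the Ready condition as True. The check reduces to reading that condition from the pod status; a hedged sketch that does the same thing by shelling out to kubectl with a JSONPath filter (the pod name and namespace are taken from the log, the polling interval is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition is "True",
// matching the `has status "Ready":"False"` messages in the log.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for {
		ready, err := podReady("kube-system", "metrics-server-6867b74b74-hmghm")
		fmt.Printf("ready=%v err=%v\n", ready, err)
		if ready {
			return
		}
		// Keep polling through transient errors, as the test does.
		time.Sleep(2 * time.Second)
	}
}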
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
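The "Gathering logs for ..." steps always run the same small set of diagnostics: journalctl for the kubelet and CRI-O units, a level-filtered dmesg, the bundled kubectl for "describe nodes", and a container listing that falls back from crictl to docker (the backtick construction above substitutes a bare "crictl" when `which` finds nothing, and switches to "docker ps -a" if the crictl invocation fails). A sketch of that command table, run locally through bash -c as in the log (the helper structure is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// gatherCommands lists the diagnostics seen in the log, keyed by the
// same labels used in the "Gathering logs for ..." messages.
var gatherCommands = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for label, cmd := range gatherCommands {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
	}
}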
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
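Everything from the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe down to the container-status gathering is one pass of a wait loop: the process checks for a running apiserver, finds nothing, collects diagnostics, and retries a few seconds later, which is why the block repeats with only the timestamps changing. A minimal sketch of that poll-until-deadline shape (the interval and timeout values are assumptions, not minikube's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged probe
// "sudo pgrep -xnf kube-apiserver.*minikube.*".
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 only when a matching process exists
}

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the real test this is where the crictl/journalctl/dmesg
		// diagnostics above are collected before the next attempt.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}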
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
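	The two deprecation warnings above name their own remediation, kubeadm config migrate. A hedged sketch of running that migration on the node, assuming the config lives at the /var/tmp/minikube/kubeadm.yaml path that appears later in this log:

	    # Illustrative: regenerate the v1beta3 config as the current kubeadm API version, then inspect the diff.
	    sudo kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm.new.yaml
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.new.yaml || true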
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
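	The mkdir and scp above place a 496-byte bridge CNI config at /etc/cni/net.d/1-k8s.conflist. The file's contents are not shown in the log; purely as an illustration (values assumed, not the actual file minikube writes), a minimal bridge conflist of that general shape could be created like this:

	    # Illustrative only; the real 1-k8s.conflist may differ in plugin list, subnet and version.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF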
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
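	The 4m0s wait above gave up on metrics-server-6867b74b74-hd777 ever reaching Ready, the same symptom behind the MetricsServer-related failures in this report, and minikube responds by resetting the cluster. A hedged triage sketch for that state (context name taken from the no-preload-441627 profile this process is driving; the k8s-app=metrics-server label is assumed from the stock metrics-server manifests):

	    # Why is metrics-server not Ready? Describe the pod and look at recent namespace events.
	    kubectl --context no-preload-441627 -n kube-system describe pod -l k8s-app=metrics-server
	    kubectl --context no-preload-441627 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20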
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
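	The repeated "kubectl get sa default" calls above are minikube polling for the default ServiceAccount to exist before it grants kube-system:default cluster-admin via the clusterrolebinding command a few lines earlier; elevateKubeSystemPrivileges is just that wait plus the binding. The same step expressed directly in shell, reusing the exact commands from the log (the loop shape is illustrative):

	    # Wait for the "default" ServiceAccount, then bind cluster-admin to kube-system:default.
	    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	      --kubeconfig=/var/lib/minikube/kubeconfig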
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
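	The node and pod readiness gates above can be approximated with plain kubectl; a hedged equivalent using the node name and label selectors listed in the log (the kubectl context name is assumed to match the profile):

	    # Approximate the same readiness checks from the host.
	    kubectl --context embed-certs-823470 wait --for=condition=Ready node/embed-certs-823470 --timeout=6m
	    kubectl --context embed-certs-823470 -n kube-system wait --for=condition=Ready pod \
	      -l k8s-app=kube-dns --timeout=6m
	    kubectl --context embed-certs-823470 -n kube-system wait --for=condition=Ready pod \
	      -l k8s-app=kube-proxy --timeout=6m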
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
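	After the four metrics-server manifests above are applied, the usual way to confirm the addon is serving is through its APIService registration and kubectl top; in this run the deployment points at the unreachable fake.domain image (see the "Using image" line above), so the pod is expected to stay Pending. A hedged check, with context and label names assumed from this run and the stock metrics-server manifests:

	    # Expected to show an unavailable APIService / Pending pod while the fake.domain image cannot be pulled.
	    kubectl --context embed-certs-823470 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-823470 -n kube-system get deploy,pods -l k8s-app=metrics-server
	    kubectl --context embed-certs-823470 top nodes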
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
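	The three addons enabled above map onto ordinary minikube CLI operations; outside the test harness the same end state could be reached roughly like this (profile name taken from the log; illustrative, since the test enables them through the Go API rather than the CLI):

	    minikube -p embed-certs-823470 addons enable storage-provisioner
	    minikube -p embed-certs-823470 addons enable default-storageclass
	    minikube -p embed-certs-823470 addons enable metrics-server
	    minikube -p embed-certs-823470 addons list   # verify the three show as enabled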
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
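	The kubelet-check failure above (connection refused on 127.0.0.1:10248) is the standard kubeadm symptom of a kubelet that has not started or is crash-looping; a hedged on-node triage sketch:

	    # Run on the affected node: is the kubelet unit up, what do its last logs say, does healthz answer?
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet --no-pager -n 100
	    curl -sS http://localhost:10248/healthz; echo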
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
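	The healthz probe above hits https://192.168.61.50:8443/healthz directly; once a kubeconfig exists, the same endpoints are reachable through kubectl's raw API access. A hedged equivalent (context name assumed to match the profile):

	    kubectl --context embed-certs-823470 get --raw /healthz; echo
	    kubectl --context embed-certs-823470 get --raw '/readyz?verbose' | head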
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
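	With the run above finished, the kubeconfig context is pointed at the new profile; a quick sanity check from the host might look like this (illustrative):

	    kubectl config current-context      # expected: embed-certs-823470
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods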
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
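	The four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference control-plane.minikube.internal:8443 is removed so the following kubeadm init can regenerate it. The same logic as a compact loop (an illustrative restatement of the commands in the log):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done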
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
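	The readiness and health checks performed above for "no-preload-441627" can be reproduced by hand; a minimal sketch, assuming kubectl is already pointed at the kubeconfig this run configured:

		# node and system pods, as waited on by node_ready.go / pod_ready.go above
		kubectl --context no-preload-441627 get nodes
		kubectl --context no-preload-441627 -n kube-system get pods
		# apiserver health, mirroring the https://192.168.50.161:8443/healthz probe in the log
		kubectl --context no-preload-441627 get --raw='/healthz'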
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
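	The kubelet-check failure above carries its own troubleshooting hints; collected here into one sequence (a sketch only, to be run on the node where this kubeadm init is executing, e.g. over minikube ssh):

		# kubelet service state and recent log
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		# control-plane containers cri-o managed to start, if any
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# inspect a failing container's log (replace CONTAINERID)
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID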
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
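The kubeadm output above already names the checks to run when the kubelet health endpoint refuses connections, and the minikube warning adds the cgroup-driver hint. A minimal follow-up sketch, assuming shell access to the node and using only the commands quoted in the log itself; the profile name is a placeholder, not taken from this report:

	# inspect the kubelet service and its journal, as the kubeadm advice suggests
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers via CRI-O, using the socket path reported above
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry with the systemd cgroup driver, per the minikube suggestion above
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd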
	
	
	==> CRI-O <==
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.644920417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074830644898238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cae0fce7-cb88-4665-a67b-9d1ed727bdd6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.646460188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cffa14e8-43c2-4952-b291-d28b3a0d6679 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.646540252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cffa14e8-43c2-4952-b291-d28b3a0d6679 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.646805810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cffa14e8-43c2-4952-b291-d28b3a0d6679 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.685595534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0436f387-70fb-43e9-9542-25eab23ddcd2 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.685662236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0436f387-70fb-43e9-9542-25eab23ddcd2 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.688133996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fd64c67-6cb6-4395-9622-67981dbb0fbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.688551271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074830688529392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fd64c67-6cb6-4395-9622-67981dbb0fbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.689818841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edcefc54-9e38-4c62-b336-a992638dd300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.689872546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edcefc54-9e38-4c62-b336-a992638dd300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.690092480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edcefc54-9e38-4c62-b336-a992638dd300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.734998038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8cfda30-8b5c-47fe-9bff-fd489ac2cce0 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.735072734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8cfda30-8b5c-47fe-9bff-fd489ac2cce0 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.736253074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=632d3820-3bf0-4b57-af9f-aa04ab52fdef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.736701985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074830736677396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=632d3820-3bf0-4b57-af9f-aa04ab52fdef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.737575934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c35d30d-68fb-40a2-9920-21ea99f75b97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.737653635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c35d30d-68fb-40a2-9920-21ea99f75b97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.738003817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c35d30d-68fb-40a2-9920-21ea99f75b97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.770856705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8437aae-9cb2-4750-8477-7e5c861c68f9 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.770939719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8437aae-9cb2-4750-8477-7e5c861c68f9 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.772320283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b0f2ea2-6556-4568-8d90-53e9c98f2036 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.772707400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074830772686565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b0f2ea2-6556-4568-8d90-53e9c98f2036 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.773215287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9609733f-85ab-43d4-8337-840454f3ac8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.773291078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9609733f-85ab-43d4-8337-840454f3ac8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:30 no-preload-441627 crio[722]: time="2024-08-19 13:40:30.773521771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e,PodSandboxId:ff68b19fa6ad66156e584aff98d252301ae07c98e3698a0fe8a2df77384bd2d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073977016804631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mvn98,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34ae24b-14c2-4b5a-8936-fd1785e1e26f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b,PodSandboxId:ae2af585e8ee52fc20c26364596550ea6107cd908de719ec7a2574655167b162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724073976975187054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p6zhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 74e6b340-a400-4672-a8c2-3e3ffe39019f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8,PodSandboxId:3cfeee60ad41511864ea0110dfb2207112953e5bc6796e59508a75bf6a6ba607,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1724073976795083767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ff9d6-b00d-4e31-8f05-60454adbcf44,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806,PodSandboxId:023cc108d67fa33c3fdc7162de36f3063275f942163fe8af9c0a8e60d18d0739,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724073975632087585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9jhss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de8a2831-619c-4651-bb20-b077271795e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a,PodSandboxId:e8e4fb056925aed31308d762e7bc9f5f0ecc305988f216ed457ab30a425f2969,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724073964533905644,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf8d24aca618195031c14e2b36a221e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4,PodSandboxId:4ee4ace14f1d9b101714d7f5e42341e9fef619c61ca6364aa0838f9e9f79fab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724073964513765611,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338e5c7cfc6fc69155ee118d0b3e730f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36,PodSandboxId:a4ba0bbd95bd890f50f33598ac388c2584ee2219e843b8143053a55ebba99aea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724073964540190542,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bd63b37df0ef774f6078fbfa346463,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c,PodSandboxId:47cbb5662d6177d7b2483ba1ec7dca65015f0eb589f2dc1e8d689eaf152ce2f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724073964472863183,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7,PodSandboxId:5484cc94ffcba8ac891d9c148047ba3453842837d6a79beb2f917f91967b6f4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724073675615273001,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-441627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0601408a2098e375a2f0f74988fd88,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9609733f-85ab-43d4-8337-840454f3ac8a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e17eb18e8b9ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   ff68b19fa6ad6       coredns-6f6b679f8f-mvn98
	9192ed2c60578       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   ae2af585e8ee5       coredns-6f6b679f8f-p6zhv
	5d764ddaa462f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   3cfeee60ad415       storage-provisioner
	32b6c7c210803       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   023cc108d67fa       kube-proxy-9jhss
	a3fc542904b35       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   a4ba0bbd95bd8       kube-controller-manager-no-preload-441627
	b2b4e2ac023ac       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   e8e4fb056925a       kube-scheduler-no-preload-441627
	68a14670fea88       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   4ee4ace14f1d9       etcd-no-preload-441627
	9d17b2b272840       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   47cbb5662d617       kube-apiserver-no-preload-441627
	5a5df38b0f875       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   5484cc94ffcba       kube-apiserver-no-preload-441627
	
	
	==> coredns [9192ed2c605782013ccb50760b7ce2ea0f7e1f383127ffc1e344d47164ab080b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e17eb18e8b9aeb57dcebc5d16ad66bc39719c176421926f9db9c613758b6713e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-441627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-441627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=no-preload-441627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:26:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-441627
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:40:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:36:31 +0000   Mon, 19 Aug 2024 13:26:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:36:31 +0000   Mon, 19 Aug 2024 13:26:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:36:31 +0000   Mon, 19 Aug 2024 13:26:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:36:31 +0000   Mon, 19 Aug 2024 13:26:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.161
	  Hostname:    no-preload-441627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb2ad5a0a588440892626724dbf0c23e
	  System UUID:                cb2ad5a0-a588-4408-9262-6724dbf0c23e
	  Boot ID:                    ac99fccb-06be-4148-8d22-2e0b842e0c48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-mvn98                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-p6zhv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-441627                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-441627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-441627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-9jhss                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-441627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-rzp96              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-441627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-441627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-441627 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-441627 event: Registered Node no-preload-441627 in Controller
	
	
	==> dmesg <==
	[  +0.043989] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.045839] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.590373] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.371695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.381353] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.072261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064530] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.201399] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.124738] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.281680] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Aug19 13:21] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.063486] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.149136] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +3.276619] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.121924] kauditd_printk_skb: 55 callbacks suppressed
	[ +28.136085] kauditd_printk_skb: 30 callbacks suppressed
	[Aug19 13:26] systemd-fstab-generator[3036]: Ignoring "noauto" option for root device
	[  +0.069449] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.486632] systemd-fstab-generator[3358]: Ignoring "noauto" option for root device
	[  +0.081829] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.303165] systemd-fstab-generator[3471]: Ignoring "noauto" option for root device
	[  +0.123314] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.693247] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [68a14670fea88faa8a82674c3840398c98df06f827790d16ad926599f32c72f4] <==
	{"level":"info","ts":"2024-08-19T13:26:05.112543Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:26:05.112234Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9aa7cd058091608f","local-member-id":"3fbdf04b5b0eb504","added-peer-id":"3fbdf04b5b0eb504","added-peer-peer-urls":["https://192.168.50.161:2380"]}
	{"level":"info","ts":"2024-08-19T13:26:05.952582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T13:26:05.952641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T13:26:05.952669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgPreVoteResp from 3fbdf04b5b0eb504 at term 1"}
	{"level":"info","ts":"2024-08-19T13:26:05.952685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.952691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgVoteResp from 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.952700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.952709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3fbdf04b5b0eb504 elected leader 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-08-19T13:26:05.954058Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3fbdf04b5b0eb504","local-member-attributes":"{Name:no-preload-441627 ClientURLs:[https://192.168.50.161:2379]}","request-path":"/0/members/3fbdf04b5b0eb504/attributes","cluster-id":"9aa7cd058091608f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:26:05.954113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:26:05.954437Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.954502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:26:05.955183Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:26:05.955654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:26:05.955672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:26:05.955705Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9aa7cd058091608f","local-member-id":"3fbdf04b5b0eb504","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.955753Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.955775Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:26:05.956118Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.161:2379"}
	{"level":"info","ts":"2024-08-19T13:26:05.956414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:26:05.957213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:36:05.982703Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-08-19T13:36:05.991711Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":682,"took":"8.634825ms","hash":3140312107,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2265088,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-19T13:36:05.991806Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3140312107,"revision":682,"compact-revision":-1}
	
	
	==> kernel <==
	 13:40:31 up 19 min,  0 users,  load average: 0.06, 0.12, 0.09
	Linux no-preload-441627 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a5df38b0f875df0cf987689dc71f475e15b26a5616a33c181dbeb6328f0f5d7] <==
	W0819 13:25:56.238675       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.267659       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.313757       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.315162       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.506769       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.534173       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.573409       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:25:56.631019       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.197975       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.326118       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.467529       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.524631       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.664786       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:00.990848       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.062188       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.121321       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.298322       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.317238       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.414658       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.419108       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.451023       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.475444       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.495936       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.529672       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 13:26:01.576700       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9d17b2b27284087c02c9046725e1a9e5b80e950f41aeedaa7abd16eba4c8382c] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 13:36:08.328602       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:36:08.328776       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 13:36:08.329977       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:36:08.330082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:37:08.330510       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:37:08.330854       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:37:08.330517       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:37:08.330983       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:37:08.332202       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:37:08.332272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 13:39:08.332600       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:39:08.332961       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 13:39:08.332609       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 13:39:08.333174       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 13:39:08.334280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:39:08.334319       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a3fc542904b35b2fc6294b3fff14a1c5490551eeadbda4da4e44ceccff41fc36] <==
	E0819 13:35:14.389255       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:35:14.879535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:35:44.395543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:35:44.888300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:36:14.402328       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:36:14.896290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:36:31.368245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-441627"
	E0819 13:36:44.409751       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:36:44.905282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:37:14.416258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:37:14.918558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 13:37:20.067770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="139.79µs"
	I0819 13:37:33.063712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="88.753µs"
	E0819 13:37:44.422893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:37:44.926911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:38:14.429323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:38:14.935486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:38:44.436155       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:38:44.943996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:39:14.443502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:39:14.951235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:39:44.450420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:39:44.959217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 13:40:14.459848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 13:40:14.968042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [32b6c7c210803766c2b2e530eb1bbec4ddb5384f6270f6ad7dcb0c182604e806] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 13:26:15.998926       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 13:26:16.024086       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.161"]
	E0819 13:26:16.024386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:26:16.216927       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 13:26:16.217011       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 13:26:16.217100       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:26:16.223019       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:26:16.223291       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:26:16.223600       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:26:16.227207       1 config.go:197] "Starting service config controller"
	I0819 13:26:16.227278       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:26:16.227320       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:26:16.227430       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:26:16.229093       1 config.go:326] "Starting node config controller"
	I0819 13:26:16.229137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:26:16.327921       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:26:16.328027       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:26:16.329524       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b2b4e2ac023acb57b647db26868a7ae3f222214bff0091ee55abbce19f2cc80a] <==
	W0819 13:26:07.370083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 13:26:07.370112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:26:07.370211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:26:07.370257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 13:26:07.370383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:07.370507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:26:07.370555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.223983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:26:08.224116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.226560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:26:08.226631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.418599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 13:26:08.418697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.451501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:26:08.451553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.457262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:26:08.457411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.614175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 13:26:08.614287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:26:08.907452       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 13:26:08.907547       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 13:26:12.063149       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:39:21 no-preload-441627 kubelet[3365]: E0819 13:39:21.050184    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:39:30 no-preload-441627 kubelet[3365]: E0819 13:39:30.292129    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074770291758932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:39:30 no-preload-441627 kubelet[3365]: E0819 13:39:30.292577    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074770291758932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:39:32 no-preload-441627 kubelet[3365]: E0819 13:39:32.048402    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:39:40 no-preload-441627 kubelet[3365]: E0819 13:39:40.294155    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074780293871329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:39:40 no-preload-441627 kubelet[3365]: E0819 13:39:40.294246    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074780293871329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:39:47 no-preload-441627 kubelet[3365]: E0819 13:39:47.049003    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:39:50 no-preload-441627 kubelet[3365]: E0819 13:39:50.297980    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074790297631408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:39:50 no-preload-441627 kubelet[3365]: E0819 13:39:50.298068    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074790297631408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:39:59 no-preload-441627 kubelet[3365]: E0819 13:39:59.047960    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:40:00 no-preload-441627 kubelet[3365]: E0819 13:40:00.299273    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074800299039429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:00 no-preload-441627 kubelet[3365]: E0819 13:40:00.299322    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074800299039429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]: E0819 13:40:10.050900    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]: E0819 13:40:10.071423    3365 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]: E0819 13:40:10.301555    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074810301094003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:10 no-preload-441627 kubelet[3365]: E0819 13:40:10.301681    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074810301094003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:20 no-preload-441627 kubelet[3365]: E0819 13:40:20.303942    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074820303429739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:20 no-preload-441627 kubelet[3365]: E0819 13:40:20.303986    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074820303429739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:21 no-preload-441627 kubelet[3365]: E0819 13:40:21.048910    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rzp96" podUID="de104a11-2674-4f73-875c-33c2a899a89d"
	Aug 19 13:40:30 no-preload-441627 kubelet[3365]: E0819 13:40:30.306394    3365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074830305795138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 13:40:30 no-preload-441627 kubelet[3365]: E0819 13:40:30.306477    3365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074830305795138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5d764ddaa462f1d9020775d70fc5bc25f09ab09d3cf970fc1629194186cfd7b8] <==
	I0819 13:26:17.157205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:26:17.216589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:26:17.220587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:26:17.271477       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:26:17.271680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-441627_25b14b2b-ba62-4368-b0a1-03d58b36820f!
	I0819 13:26:17.287504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"93d6ee90-5ad2-49a8-92a3-1cdc99398092", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-441627_25b14b2b-ba62-4368-b0a1-03d58b36820f became leader
	I0819 13:26:17.372318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-441627_25b14b2b-ba62-4368-b0a1-03d58b36820f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441627 -n no-preload-441627
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-441627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rzp96
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-441627 describe pod metrics-server-6867b74b74-rzp96
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-441627 describe pod metrics-server-6867b74b74-rzp96: exit status 1 (64.109586ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rzp96" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-441627 describe pod metrics-server-6867b74b74-rzp96: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (303.60s)
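
For reference, the post-mortem above is just two kubectl calls: list pods whose phase is not Running, then describe each one. The following is a minimal standalone Go sketch of the same check, offered as illustration only; it is not part of the minikube test harness, assumes kubectl is on PATH, and reuses the no-preload-441627 context from the log. Unlike the harness call above, it passes each pod's namespace to describe, which sidesteps one possible cause of the NotFound error shown (describing a kube-system pod without -n).

// postmortem.go: standalone sketch (not harness code) mirroring the two
// post-mortem steps above: list non-running pods, then describe each one.
// Assumes kubectl is on PATH and the named kubeconfig context exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes kubectl with the given arguments and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const ctx = "no-preload-441627" // the profile under test in the log above

	// Step 1: list pods that are not in the Running phase, across all namespaces.
	out, err := run("--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=custom-columns=NS:.metadata.namespace,NAME:.metadata.name", "--no-headers")
	if err != nil {
		fmt.Println("listing non-running pods failed:", err, out)
		return
	}
	if out == "" || strings.Contains(out, "No resources found") {
		fmt.Println("no non-running pods")
		return
	}

	// Step 2: describe each non-running pod in its own namespace. A NotFound
	// error can mean the pod was deleted in between, or that describe was run
	// against the wrong namespace (the harness call above used no -n flag).
	for _, line := range strings.Split(out, "\n") {
		f := strings.Fields(line)
		if len(f) != 2 {
			continue
		}
		desc, derr := run("--context", ctx, "-n", f[0], "describe", "pod", f[1])
		fmt.Printf("--- %s/%s ---\n%s\n", f[0], f[1], desc)
		if derr != nil {
			fmt.Println("describe failed:", derr)
		}
	}
}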

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (165.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:37:44.026206 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:37:51.877771 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:39:02.189513 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:39:27.644476 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:39:29.741212 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:39:40.836801 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
E0819 13:39:58.878941 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.238:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.238:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (236.192673ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-620467" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-620467 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-620467 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.144µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-620467 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
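(Note: the image check above reads the container images of the dashboard-metrics-scraper deployment. A minimal manual equivalent, hypothetical and assuming the apiserver were reachable, which it was not in this run, would be:

	# Print the container images the test asserts on
	kubectl --context old-k8s-version-620467 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Expected to contain registry.k8s.io/echoserver:1.4, the override passed earlier via
	# "addons enable dashboard -p old-k8s-version-620467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4")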
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (222.740217ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-620467 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-620467 logs -n 25: (1.604364221s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-093052             | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-093052                  | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-093052 --memory=2200 --alsologtostderr   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:12 UTC | 19 Aug 24 13:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-038572  | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC |                     |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-093052 image list                           | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| delete  | -p newest-cni-093052                                   | newest-cni-093052            | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:13 UTC |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:13 UTC | 19 Aug 24 13:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-441627                  | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-441627                                   | no-preload-441627            | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-620467        | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-823470            | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-038572       | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-038572 | jenkins | v1.33.1 | 19 Aug 24 13:15 UTC | 19 Aug 24 13:24 UTC |
	|         | default-k8s-diff-port-038572                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-620467             | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC | 19 Aug 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-620467                              | old-k8s-version-620467       | jenkins | v1.33.1 | 19 Aug 24 13:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-823470                 | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-823470                                  | embed-certs-823470           | jenkins | v1.33.1 | 19 Aug 24 13:17 UTC | 19 Aug 24 13:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:17:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:17:40.922225 2153523 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:17:40.922462 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922470 2153523 out.go:358] Setting ErrFile to fd 2...
	I0819 13:17:40.922475 2153523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:17:40.922631 2153523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:17:40.923154 2153523 out.go:352] Setting JSON to false
	I0819 13:17:40.924134 2153523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":165611,"bootTime":1723907850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:17:40.924233 2153523 start.go:139] virtualization: kvm guest
	I0819 13:17:40.927254 2153523 out.go:177] * [embed-certs-823470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:17:40.928415 2153523 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:17:40.928457 2153523 notify.go:220] Checking for updates...
	I0819 13:17:40.930421 2153523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:17:40.931440 2153523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:17:40.932454 2153523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:17:40.933815 2153523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:17:40.935189 2153523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:17:40.936761 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:17:40.937143 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.937209 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.952461 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0819 13:17:40.952868 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.953423 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.953455 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.953809 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.954009 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:40.954280 2153523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:17:40.954572 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:17:40.954610 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:17:40.970308 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0819 13:17:40.970901 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:17:40.971446 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:17:40.971478 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:17:40.971798 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:17:40.972002 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:17:41.006606 2153523 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 13:17:41.007689 2153523 start.go:297] selected driver: kvm2
	I0819 13:17:41.007712 2153523 start.go:901] validating driver "kvm2" against &{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.007878 2153523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:17:41.008964 2153523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.009068 2153523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 13:17:41.024433 2153523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 13:17:41.024970 2153523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:17:41.025027 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:17:41.025044 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:17:41.025112 2153523 start.go:340] cluster config:
	{Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:17:41.025259 2153523 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:17:41.027074 2153523 out.go:177] * Starting "embed-certs-823470" primary control-plane node in "embed-certs-823470" cluster
	I0819 13:17:37.004434 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:41.028219 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:17:41.028273 2153523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 13:17:41.028296 2153523 cache.go:56] Caching tarball of preloaded images
	I0819 13:17:41.028407 2153523 preload.go:172] Found /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 13:17:41.028424 2153523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 13:17:41.028580 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:17:41.028861 2153523 start.go:360] acquireMachinesLock for embed-certs-823470: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:17:43.084440 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:46.156450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:52.236466 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:17:55.308454 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:01.388472 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:04.460482 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:10.540493 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:13.612449 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:19.692412 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:22.764436 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:28.844443 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:31.916476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:37.996470 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:41.068499 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:47.148488 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:50.220507 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:56.300450 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:18:59.372511 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:05.452504 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:08.524521 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:14.604456 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:17.676406 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:23.756476 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:26.828515 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:32.908445 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:35.980361 2151869 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0819 13:19:38.985083 2152365 start.go:364] duration metric: took 4m0.580875322s to acquireMachinesLock for "default-k8s-diff-port-038572"
	I0819 13:19:38.985152 2152365 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:38.985171 2152365 fix.go:54] fixHost starting: 
	I0819 13:19:38.985542 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:38.985578 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:39.001015 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0819 13:19:39.001515 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:39.002024 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:19:39.002048 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:39.002379 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:39.002572 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:39.002734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:19:39.004320 2152365 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038572: state=Stopped err=<nil>
	I0819 13:19:39.004361 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	W0819 13:19:39.004545 2152365 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:39.006073 2152365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-038572" ...
	I0819 13:19:38.982266 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:38.982325 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982657 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:19:38.982688 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:19:38.982924 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:19:38.984929 2151869 machine.go:96] duration metric: took 4m37.397817873s to provisionDockerMachine
	I0819 13:19:38.984974 2151869 fix.go:56] duration metric: took 4m37.421262151s for fixHost
	I0819 13:19:38.984981 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 4m37.421326953s
	W0819 13:19:38.985006 2151869 start.go:714] error starting host: provision: host is not running
	W0819 13:19:38.985119 2151869 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 13:19:38.985132 2151869 start.go:729] Will try again in 5 seconds ...
	I0819 13:19:39.007314 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Start
	I0819 13:19:39.007497 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring networks are active...
	I0819 13:19:39.008358 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network default is active
	I0819 13:19:39.008748 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Ensuring network mk-default-k8s-diff-port-038572 is active
	I0819 13:19:39.009091 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Getting domain xml...
	I0819 13:19:39.009732 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Creating domain...
	I0819 13:19:40.228941 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting to get IP...
	I0819 13:19:40.229930 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230312 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.230369 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.230279 2153980 retry.go:31] will retry after 222.94238ms: waiting for machine to come up
	I0819 13:19:40.455010 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.455640 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.455562 2153980 retry.go:31] will retry after 314.241366ms: waiting for machine to come up
	I0819 13:19:40.771105 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771564 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:40.771593 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:40.771511 2153980 retry.go:31] will retry after 475.191571ms: waiting for machine to come up
	I0819 13:19:41.248037 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248436 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.248462 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.248377 2153980 retry.go:31] will retry after 547.125638ms: waiting for machine to come up
	I0819 13:19:41.797263 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797702 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:41.797734 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:41.797653 2153980 retry.go:31] will retry after 581.6163ms: waiting for machine to come up
	I0819 13:19:42.380743 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:42.381257 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:42.381195 2153980 retry.go:31] will retry after 661.080111ms: waiting for machine to come up
	I0819 13:19:43.044143 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044615 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.044645 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.044566 2153980 retry.go:31] will retry after 815.359371ms: waiting for machine to come up
	I0819 13:19:43.986791 2151869 start.go:360] acquireMachinesLock for no-preload-441627: {Name:mk47fa737025b7faea9c65348904d9195bf44df0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 13:19:43.861792 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862227 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:43.862277 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:43.862175 2153980 retry.go:31] will retry after 1.076851262s: waiting for machine to come up
	I0819 13:19:44.940944 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:44.941544 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:44.941431 2153980 retry.go:31] will retry after 1.492394345s: waiting for machine to come up
	I0819 13:19:46.436200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:46.436720 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:46.436617 2153980 retry.go:31] will retry after 1.567728586s: waiting for machine to come up
	I0819 13:19:48.006390 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:48.006917 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:48.006810 2153980 retry.go:31] will retry after 1.799313771s: waiting for machine to come up
	I0819 13:19:49.807716 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808103 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:49.808135 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:49.808029 2153980 retry.go:31] will retry after 2.910106345s: waiting for machine to come up
	I0819 13:19:52.722235 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722611 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | unable to find current IP address of domain default-k8s-diff-port-038572 in network mk-default-k8s-diff-port-038572
	I0819 13:19:52.722644 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | I0819 13:19:52.722554 2153980 retry.go:31] will retry after 2.846975794s: waiting for machine to come up
	I0819 13:19:56.801408 2152780 start.go:364] duration metric: took 3m10.884201023s to acquireMachinesLock for "old-k8s-version-620467"
	I0819 13:19:56.801488 2152780 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:19:56.801498 2152780 fix.go:54] fixHost starting: 
	I0819 13:19:56.802005 2152780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:19:56.802046 2152780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:19:56.819893 2152780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0819 13:19:56.820323 2152780 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:19:56.820785 2152780 main.go:141] libmachine: Using API Version  1
	I0819 13:19:56.820813 2152780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:19:56.821126 2152780 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:19:56.821314 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:19:56.821452 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetState
	I0819 13:19:56.823015 2152780 fix.go:112] recreateIfNeeded on old-k8s-version-620467: state=Stopped err=<nil>
	I0819 13:19:56.823056 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	W0819 13:19:56.823300 2152780 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:19:56.824822 2152780 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-620467" ...
	I0819 13:19:55.571761 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572317 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Found IP for machine: 192.168.72.69
	I0819 13:19:55.572351 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has current primary IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.572366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserving static IP address...
	I0819 13:19:55.572819 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.572863 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | skip adding static IP to network mk-default-k8s-diff-port-038572 - found existing host DHCP lease matching {name: "default-k8s-diff-port-038572", mac: "52:54:00:42:47:f6", ip: "192.168.72.69"}
	I0819 13:19:55.572879 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Reserved static IP address: 192.168.72.69
	I0819 13:19:55.572897 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Waiting for SSH to be available...
	I0819 13:19:55.572914 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Getting to WaitForSSH function...
	I0819 13:19:55.575208 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575636 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.575662 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.575751 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH client type: external
	I0819 13:19:55.575797 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa (-rw-------)
	I0819 13:19:55.575834 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:19:55.575851 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | About to run SSH command:
	I0819 13:19:55.575895 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | exit 0
	I0819 13:19:55.696082 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | SSH cmd err, output: <nil>: 
	I0819 13:19:55.696501 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetConfigRaw
	I0819 13:19:55.697226 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:55.699788 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700112 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.700146 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.700429 2152365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/config.json ...
	I0819 13:19:55.700717 2152365 machine.go:93] provisionDockerMachine start ...
	I0819 13:19:55.700742 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:55.700975 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.703239 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703521 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.703556 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.703689 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.703887 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704055 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.704200 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.704360 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.704635 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.704651 2152365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:19:55.804602 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:19:55.804632 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.804922 2152365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-038572"
	I0819 13:19:55.804951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:55.805155 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.807793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.808144 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.808318 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.808510 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808641 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.808750 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.808888 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.809108 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.809123 2152365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038572 && echo "default-k8s-diff-port-038572" | sudo tee /etc/hostname
	I0819 13:19:55.922378 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038572
	
	I0819 13:19:55.922428 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:55.925767 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926131 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:55.926172 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:55.926350 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:55.926563 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926747 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:55.926891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:55.927130 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:55.927403 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:55.927433 2152365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038572/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:19:56.033031 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:19:56.033082 2152365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:19:56.033108 2152365 buildroot.go:174] setting up certificates
	I0819 13:19:56.033121 2152365 provision.go:84] configureAuth start
	I0819 13:19:56.033132 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetMachineName
	I0819 13:19:56.033450 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.036371 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.036898 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.036926 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.037063 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.039494 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039814 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.039844 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.039962 2152365 provision.go:143] copyHostCerts
	I0819 13:19:56.040055 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:19:56.040073 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:19:56.040181 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:19:56.040329 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:19:56.040342 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:19:56.040394 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:19:56.040582 2152365 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:19:56.040611 2152365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:19:56.040666 2152365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:19:56.040752 2152365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038572 san=[127.0.0.1 192.168.72.69 default-k8s-diff-port-038572 localhost minikube]
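The provision.go:117 step above generates a server certificate signed by the minikube CA with the SANs listed (127.0.0.1, the VM IP, the machine name, localhost, minikube). A minimal Go sketch of building that SAN set into an x509 template follows; the serial number, key size, validity, and self-signing (instead of signing with the real CA key) are illustrative assumptions only:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs and org taken from the log line; everything else here is illustrative.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-038572"}},
			DNSNames:     []string{"default-k8s-diff-port-038572", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.69")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the CertExpiration value in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// The real provisioner signs with the minikube CA; self-signing keeps this sketch self-contained.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}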
	I0819 13:19:56.155224 2152365 provision.go:177] copyRemoteCerts
	I0819 13:19:56.155322 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:19:56.155366 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.157977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158360 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.158393 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.158573 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.158780 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.159002 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.159203 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.238700 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:19:56.263555 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 13:19:56.288212 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:19:56.311817 2152365 provision.go:87] duration metric: took 278.679594ms to configureAuth
	I0819 13:19:56.311851 2152365 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:19:56.312067 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:19:56.312187 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.315149 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315598 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.315630 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.315801 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.315977 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316115 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.316244 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.316426 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.316653 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.316770 2152365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:19:56.575936 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:19:56.575977 2152365 machine.go:96] duration metric: took 875.241726ms to provisionDockerMachine
	I0819 13:19:56.576012 2152365 start.go:293] postStartSetup for "default-k8s-diff-port-038572" (driver="kvm2")
	I0819 13:19:56.576030 2152365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:19:56.576068 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.576426 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:19:56.576468 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.579017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579420 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.579453 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.579553 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.579726 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.579891 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.580030 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.661017 2152365 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:19:56.665287 2152365 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:19:56.665315 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:19:56.665387 2152365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:19:56.665469 2152365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:19:56.665559 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:19:56.675115 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:19:56.698728 2152365 start.go:296] duration metric: took 122.697439ms for postStartSetup
	I0819 13:19:56.698771 2152365 fix.go:56] duration metric: took 17.713602497s for fixHost
	I0819 13:19:56.698793 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.701384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701733 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.701766 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.701907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.702116 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702284 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.702444 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.702625 2152365 main.go:141] libmachine: Using SSH client type: native
	I0819 13:19:56.702822 2152365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.69 22 <nil> <nil>}
	I0819 13:19:56.702836 2152365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:19:56.801168 2152365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073596.772802095
	
	I0819 13:19:56.801210 2152365 fix.go:216] guest clock: 1724073596.772802095
	I0819 13:19:56.801223 2152365 fix.go:229] Guest: 2024-08-19 13:19:56.772802095 +0000 UTC Remote: 2024-08-19 13:19:56.698775345 +0000 UTC m=+258.438276725 (delta=74.02675ms)
	I0819 13:19:56.801278 2152365 fix.go:200] guest clock delta is within tolerance: 74.02675ms
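The fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it to the host clock, accepting the machine when the delta stays inside a tolerance. A self-contained Go sketch of that comparison, using a hypothetical one-second tolerance (the real threshold is not shown in the log):

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// clockDeltaWithinTolerance reports the skew between guest and host clocks
	// and whether it is small enough to ignore.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		host := time.Now()
		guest := host.Add(74 * time.Millisecond) // e.g. the 74ms delta seen in the log
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
	}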
	I0819 13:19:56.801287 2152365 start.go:83] releasing machines lock for "default-k8s-diff-port-038572", held for 17.81615508s
	I0819 13:19:56.801326 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.801669 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:56.804805 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805198 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.805242 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.805377 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806035 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806291 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:19:56.806387 2152365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:19:56.806442 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.806562 2152365 ssh_runner.go:195] Run: cat /version.json
	I0819 13:19:56.806592 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:19:56.809097 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809446 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.809476 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809513 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.809599 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.809776 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.809951 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810028 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:56.810053 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:56.810115 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.810256 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:19:56.810456 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:19:56.810627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:19:56.810777 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:19:56.905573 2152365 ssh_runner.go:195] Run: systemctl --version
	I0819 13:19:56.911395 2152365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:19:57.053460 2152365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:19:57.060012 2152365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:19:57.060091 2152365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:19:57.076651 2152365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:19:57.076678 2152365 start.go:495] detecting cgroup driver to use...
	I0819 13:19:57.076746 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:19:57.095982 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:19:57.109198 2152365 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:19:57.109282 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:19:57.124552 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:19:57.141665 2152365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:19:57.250580 2152365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:19:57.395902 2152365 docker.go:233] disabling docker service ...
	I0819 13:19:57.395986 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:19:57.411232 2152365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:19:57.424263 2152365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:19:57.576953 2152365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:19:57.695946 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:19:57.709501 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:19:57.727815 2152365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:19:57.727889 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.738210 2152365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:19:57.738284 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.748534 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.759949 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.770744 2152365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:19:57.781844 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.792047 2152365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.810916 2152365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:19:57.826741 2152365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:19:57.839687 2152365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:19:57.839753 2152365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:19:57.855192 2152365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:19:57.865326 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:19:57.991646 2152365 ssh_runner.go:195] Run: sudo systemctl restart crio
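Taken together, the sed commands above converge /etc/crio/crio.conf.d/02-crio.conf on roughly the following key/value state before CRI-O is restarted (section headers and all other keys in the real drop-in are omitted; this shows only the net effect of the edits visible in the log):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]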
	I0819 13:19:58.136425 2152365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:19:58.136522 2152365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:19:58.141971 2152365 start.go:563] Will wait 60s for crictl version
	I0819 13:19:58.142047 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:19:58.146029 2152365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:19:58.182474 2152365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:19:58.182580 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.212887 2152365 ssh_runner.go:195] Run: crio --version
	I0819 13:19:58.242578 2152365 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:19:58.243711 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetIP
	I0819 13:19:58.246652 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.246985 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:19:58.247031 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:19:58.247262 2152365 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 13:19:58.251445 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:19:58.266177 2152365 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:19:58.266324 2152365 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:19:58.266417 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:19:56.826004 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .Start
	I0819 13:19:56.826161 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring networks are active...
	I0819 13:19:56.826953 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network default is active
	I0819 13:19:56.827328 2152780 main.go:141] libmachine: (old-k8s-version-620467) Ensuring network mk-old-k8s-version-620467 is active
	I0819 13:19:56.827789 2152780 main.go:141] libmachine: (old-k8s-version-620467) Getting domain xml...
	I0819 13:19:56.828485 2152780 main.go:141] libmachine: (old-k8s-version-620467) Creating domain...
	I0819 13:19:58.099058 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting to get IP...
	I0819 13:19:58.099866 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.100256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.100359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.100246 2154107 retry.go:31] will retry after 217.468132ms: waiting for machine to come up
	I0819 13:19:58.319821 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.320335 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.320359 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.320295 2154107 retry.go:31] will retry after 270.947251ms: waiting for machine to come up
	I0819 13:19:58.592897 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.593586 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.593610 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.593526 2154107 retry.go:31] will retry after 389.864812ms: waiting for machine to come up
	I0819 13:19:58.985269 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:58.985722 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:58.985757 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:58.985674 2154107 retry.go:31] will retry after 559.916587ms: waiting for machine to come up
	I0819 13:19:59.547846 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:19:59.548504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:19:59.548538 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:19:59.548446 2154107 retry.go:31] will retry after 759.198751ms: waiting for machine to come up
	I0819 13:20:00.309569 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:00.310093 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:00.310126 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:00.310041 2154107 retry.go:31] will retry after 836.488095ms: waiting for machine to come up
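The retry.go messages above come from a wait loop that polls the libvirt DHCP leases with growing, jittered delays until the domain reports an IP. A rough Go sketch of such a loop; lookupIP, the attempt cap, and the backoff factors are illustrative stand-ins, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("unable to find current IP address")

	// lookupIP stands in for querying the DHCP leases; it fails a few times
	// before "finding" an address so the loop below has something to do.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoLease
		}
		return "192.168.72.69", nil
	}

	func waitForIP(maxAttempts int) (string, error) {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if ip, err := lookupIP(attempt); err == nil {
				return ip, nil
			}
			// Grow the delay and add jitter, loosely mirroring the
			// "will retry after ..." messages in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("machine did not get an IP after %d attempts", maxAttempts)
	}

	func main() {
		ip, err := waitForIP(10)
		fmt.Println(ip, err)
	}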
	I0819 13:19:58.306336 2152365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:19:58.306490 2152365 ssh_runner.go:195] Run: which lz4
	I0819 13:19:58.311147 2152365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:19:58.315715 2152365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:19:58.315751 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:19:59.751536 2152365 crio.go:462] duration metric: took 1.440444093s to copy over tarball
	I0819 13:19:59.751611 2152365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:01.909604 2152365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157959479s)
	I0819 13:20:01.909646 2152365 crio.go:469] duration metric: took 2.158078046s to extract the tarball
	I0819 13:20:01.909667 2152365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:01.954789 2152365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:02.002755 2152365 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:02.002786 2152365 cache_images.go:84] Images are preloaded, skipping loading
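The preload sequence above first lists CRI-O's images, and only when the expected Kubernetes images are missing does it scp preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 to the VM and unpack it with `tar --xattrs -I lz4 -C /var -xf`. A condensed Go sketch of the "is it already preloaded?" decision, assuming crictl is available and using kube-apiserver:v1.31.0 as the marker image:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePreloaded checks `crictl images --output json` for a marker image,
	// mirroring how the log decides whether to copy preloaded.tar.lz4.
	func imagePreloaded(marker string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var resp struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			return false, err
		}
		for _, img := range resp.Images {
			for _, tag := range img.RepoTags {
				if strings.Contains(tag, marker) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := imagePreloaded("kube-apiserver:v1.31.0")
		if err != nil {
			fmt.Println("crictl check failed:", err)
			return
		}
		if !ok {
			// In the log this branch is followed by scp of the tarball and extraction into /var.
			fmt.Println("images not preloaded; would copy and extract the tarball")
			return
		}
		fmt.Println("all images are preloaded, skipping loading")
	}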
	I0819 13:20:02.002805 2152365 kubeadm.go:934] updating node { 192.168.72.69 8444 v1.31.0 crio true true} ...
	I0819 13:20:02.002970 2152365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-038572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:02.003061 2152365 ssh_runner.go:195] Run: crio config
	I0819 13:20:02.061207 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:02.061238 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:02.061256 2152365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:02.061295 2152365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.69 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038572 NodeName:default-k8s-diff-port-038572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:02.061496 2152365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038572"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:02.061628 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:02.072573 2152365 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:02.072676 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:02.083114 2152365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 13:20:02.100736 2152365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:02.117359 2152365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 13:20:02.135642 2152365 ssh_runner.go:195] Run: grep 192.168.72.69	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:02.139682 2152365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
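The bash one-liner above updates /etc/hosts idempotently: it filters out any existing control-plane.minikube.internal line, appends the current mapping, and copies the temp file back with sudo. The same rewrite as a small Go sketch, assuming the caller can already write the file directly:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line ending in "\t<name>" and appends
	// "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.72.69", "control-plane.minikube.internal"); err != nil {
			fmt.Println("update failed:", err)
		}
	}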
	I0819 13:20:02.152242 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:02.278949 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:02.296896 2152365 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572 for IP: 192.168.72.69
	I0819 13:20:02.296925 2152365 certs.go:194] generating shared ca certs ...
	I0819 13:20:02.296949 2152365 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:02.297166 2152365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:02.297230 2152365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:02.297248 2152365 certs.go:256] generating profile certs ...
	I0819 13:20:02.297375 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/client.key
	I0819 13:20:02.297441 2152365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key.ca317f39
	I0819 13:20:02.297489 2152365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key
	I0819 13:20:02.297641 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:02.297692 2152365 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:02.297705 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:02.297730 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:02.297752 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:02.297778 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:02.297821 2152365 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:02.298812 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:02.332660 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:02.376995 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:02.418243 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:02.455354 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 13:20:02.488839 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:02.514418 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:02.540797 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/default-k8s-diff-port-038572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:20:02.565628 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:02.590617 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:02.614942 2152365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:02.639455 2152365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:02.657597 2152365 ssh_runner.go:195] Run: openssl version
	I0819 13:20:02.663545 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:02.674396 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679061 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.679138 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:02.685136 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:02.696428 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:02.707271 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.711988 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.712052 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:02.717933 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:02.728982 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:02.739866 2152365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744728 2152365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.744799 2152365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:02.750660 2152365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:02.761203 2152365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:02.765996 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:02.772371 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:02.778453 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:02.784645 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:02.790342 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:02.795882 2152365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
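Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate is still valid 24 hours from now. An equivalent check with Go's crypto/x509, where the path in main is only an example taken from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate in the given PEM file expires
	// within d, which is what `openssl x509 -checkend` tests (86400s == 24h).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}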
	I0819 13:20:02.801355 2152365 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-038572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-038572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:02.801448 2152365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:02.801536 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.839586 2152365 cri.go:89] found id: ""
	I0819 13:20:02.839684 2152365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:02.850336 2152365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:02.850363 2152365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:02.850424 2152365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:02.860676 2152365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:02.861753 2152365 kubeconfig.go:125] found "default-k8s-diff-port-038572" server: "https://192.168.72.69:8444"
	I0819 13:20:02.863979 2152365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:02.875284 2152365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.69
	I0819 13:20:02.875321 2152365 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:02.875337 2152365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:02.875398 2152365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:02.914277 2152365 cri.go:89] found id: ""
	I0819 13:20:02.914383 2152365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:02.931976 2152365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:02.941897 2152365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:02.941924 2152365 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:02.941994 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 13:20:02.951103 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:02.951179 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:02.960642 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 13:20:02.970204 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:02.970278 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:02.980235 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.989306 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:02.989371 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:02.998821 2152365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 13:20:03.007907 2152365 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:03.007978 2152365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:03.017698 2152365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:03.027048 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:03.158491 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:01.148388 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:01.148845 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:01.148874 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:01.148801 2154107 retry.go:31] will retry after 1.045849266s: waiting for machine to come up
	I0819 13:20:02.196768 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:02.197307 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:02.197341 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:02.197243 2154107 retry.go:31] will retry after 1.09397017s: waiting for machine to come up
	I0819 13:20:03.292733 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:03.293195 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:03.293233 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:03.293141 2154107 retry.go:31] will retry after 1.29393597s: waiting for machine to come up
	I0819 13:20:04.588965 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:04.589429 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:04.589463 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:04.589356 2154107 retry.go:31] will retry after 1.963033352s: waiting for machine to come up
	I0819 13:20:04.318435 2152365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.159898195s)
	I0819 13:20:04.318475 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.536005 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.600370 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:04.701830 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:04.701954 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.202875 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.702925 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:05.718529 2152365 api_server.go:72] duration metric: took 1.01671668s to wait for apiserver process to appear ...
	I0819 13:20:05.718570 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:05.718598 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.393400 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.393440 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.393459 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.434960 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:08.434992 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:08.719450 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:08.725655 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:08.725689 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.218770 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.224234 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:09.224273 2152365 api_server.go:103] status: https://192.168.72.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:09.718902 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:20:09.727048 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:20:09.733609 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:09.733641 2152365 api_server.go:131] duration metric: took 4.015063389s to wait for apiserver health ...
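
The 403 → 500 → 200 sequence above is the apiserver coming up: anonymous probes of /healthz are rejected outright, then fail only on the rbac/bootstrap-roles and scheduling post-start hooks, and finally return ok, at which point the wait finishes after roughly 4s. For illustration only (this is not minikube's api_server.go; the URL and timeout are taken from the log as assumptions), a readiness poll of this shape can be written as:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK,
// tolerating the transient 403/500 responses seen while bootstrap hooks finish.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during bring-up, so this
		// illustrative probe skips verification (not something to do in production).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.69:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
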
	I0819 13:20:09.733652 2152365 cni.go:84] Creating CNI manager for ""
	I0819 13:20:09.733660 2152365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:09.735422 2152365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:20:06.553692 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:06.554176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:06.554208 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:06.554119 2154107 retry.go:31] will retry after 2.420122298s: waiting for machine to come up
	I0819 13:20:08.975682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:08.976129 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:08.976176 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:08.976073 2154107 retry.go:31] will retry after 2.64333029s: waiting for machine to come up
	I0819 13:20:09.736901 2152365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:09.747568 2152365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:20:09.766156 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:09.777905 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:09.777944 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:09.777953 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:09.777963 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:09.777978 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:09.777990 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:20:09.778006 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:09.778017 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:09.778029 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:20:09.778040 2152365 system_pods.go:74] duration metric: took 11.855811ms to wait for pod list to return data ...
	I0819 13:20:09.778050 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:09.787946 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:09.787991 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:09.788010 2152365 node_conditions.go:105] duration metric: took 9.95462ms to run NodePressure ...
	I0819 13:20:09.788035 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:10.079231 2152365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083839 2152365 kubeadm.go:739] kubelet initialised
	I0819 13:20:10.083865 2152365 kubeadm.go:740] duration metric: took 4.599976ms waiting for restarted kubelet to initialise ...
	I0819 13:20:10.083876 2152365 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:10.089020 2152365 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.094701 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094738 2152365 pod_ready.go:82] duration metric: took 5.686754ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.094751 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.094760 2152365 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.099257 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099292 2152365 pod_ready.go:82] duration metric: took 4.514562ms for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.099308 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.099317 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.103745 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103778 2152365 pod_ready.go:82] duration metric: took 4.449246ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.103799 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.103811 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.170603 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170636 2152365 pod_ready.go:82] duration metric: took 66.80935ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.170649 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.170655 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.569837 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569869 2152365 pod_ready.go:82] duration metric: took 399.203757ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.569879 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-proxy-wnkk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.569892 2152365 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:10.970515 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970553 2152365 pod_ready.go:82] duration metric: took 400.651091ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:10.970571 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:10.970600 2152365 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:11.371258 2152365 pod_ready.go:98] node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371306 2152365 pod_ready.go:82] duration metric: took 400.692872ms for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:11.371324 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-038572" hosting pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:11.371335 2152365 pod_ready.go:39] duration metric: took 1.287441542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
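
Each per-pod wait above ends immediately with a skip because the node default-k8s-diff-port-038572 still reports Ready:"False", so the whole 4m extra wait completes in about 1.3s. A minimal client-go sketch of such a per-pod Ready poll (illustrative only; the kubeconfig path, namespace, and pod name are assumptions lifted from the log, not the test harness's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-8tl4c", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
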
	I0819 13:20:11.371365 2152365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:20:11.387535 2152365 ops.go:34] apiserver oom_adj: -16
	I0819 13:20:11.387559 2152365 kubeadm.go:597] duration metric: took 8.537187666s to restartPrimaryControlPlane
	I0819 13:20:11.387568 2152365 kubeadm.go:394] duration metric: took 8.586226152s to StartCluster
	I0819 13:20:11.387585 2152365 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.387664 2152365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:11.389218 2152365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:11.389474 2152365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.69 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:20:11.389580 2152365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:20:11.389706 2152365 config.go:182] Loaded profile config "default-k8s-diff-port-038572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:11.389711 2152365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389755 2152365 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-038572"
	I0819 13:20:11.389764 2152365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-038572"
	W0819 13:20:11.389767 2152365 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:20:11.389751 2152365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-038572"
	I0819 13:20:11.389804 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.389806 2152365 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.389818 2152365 addons.go:243] addon metrics-server should already be in state true
	I0819 13:20:11.389827 2152365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-038572"
	I0819 13:20:11.389849 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.390217 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390246 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390254 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390283 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.390305 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.390341 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.391417 2152365 out.go:177] * Verifying Kubernetes components...
	I0819 13:20:11.392645 2152365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:11.406609 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0819 13:20:11.407081 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.407694 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.407723 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.408143 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.408421 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.411206 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 13:20:11.411468 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0819 13:20:11.411740 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.411900 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.412296 2152365 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-038572"
	W0819 13:20:11.412319 2152365 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:20:11.412335 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412351 2152365 host.go:66] Checking if "default-k8s-diff-port-038572" exists ...
	I0819 13:20:11.412393 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412482 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.412508 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.412744 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.412758 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.412793 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.412794 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.413326 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413350 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.413374 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.413399 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.429280 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0819 13:20:11.429783 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.430330 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.430354 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.430797 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.430981 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.433018 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.433214 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 13:20:11.433651 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.433708 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0819 13:20:11.434138 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434152 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434199 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.434455 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.434636 2152365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:11.434806 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.434830 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.434988 2152365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:11.435019 2152365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:11.435164 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.435375 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.435924 2152365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.435944 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:20:11.435965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.437309 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.438592 2152365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:20:11.439482 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.439584 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:20:11.439598 2152365 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:20:11.439617 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.439905 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.439943 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.440077 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.440243 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.440399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.440517 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.442627 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.442965 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.442990 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.443134 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.443274 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.443500 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.443627 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.457318 2152365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0819 13:20:11.457870 2152365 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:11.458444 2152365 main.go:141] libmachine: Using API Version  1
	I0819 13:20:11.458481 2152365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:11.458854 2152365 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:11.459017 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetState
	I0819 13:20:11.460870 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .DriverName
	I0819 13:20:11.461105 2152365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.461121 2152365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:20:11.461138 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHHostname
	I0819 13:20:11.463902 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464384 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:47:f6", ip: ""} in network mk-default-k8s-diff-port-038572: {Iface:virbr4 ExpiryTime:2024-08-19 14:19:50 +0000 UTC Type:0 Mac:52:54:00:42:47:f6 Iaid: IPaddr:192.168.72.69 Prefix:24 Hostname:default-k8s-diff-port-038572 Clientid:01:52:54:00:42:47:f6}
	I0819 13:20:11.464415 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | domain default-k8s-diff-port-038572 has defined IP address 192.168.72.69 and MAC address 52:54:00:42:47:f6 in network mk-default-k8s-diff-port-038572
	I0819 13:20:11.464581 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHPort
	I0819 13:20:11.464764 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHKeyPath
	I0819 13:20:11.464942 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .GetSSHUsername
	I0819 13:20:11.465089 2152365 sshutil.go:53] new ssh client: &{IP:192.168.72.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/default-k8s-diff-port-038572/id_rsa Username:docker}
	I0819 13:20:11.620783 2152365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:11.641158 2152365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:11.745067 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:20:11.783221 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:20:11.783260 2152365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:20:11.791134 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:20:11.848325 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:20:11.848359 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:20:11.955559 2152365 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:11.955670 2152365 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:20:12.022022 2152365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:20:12.892823 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101637914s)
	I0819 13:20:12.892896 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.892909 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.892966 2152365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147858081s)
	I0819 13:20:12.893019 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893033 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893201 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893327 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893354 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893366 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893375 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893377 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893386 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893391 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.893399 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.893630 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893681 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.893704 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893677 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.893729 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.893657 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.899954 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.899970 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.900202 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.900220 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.900223 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972429 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972465 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.972842 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.972866 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.972882 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.972897 2152365 main.go:141] libmachine: Making call to close driver server
	I0819 13:20:12.972907 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) Calling .Close
	I0819 13:20:12.973181 2152365 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:20:12.973225 2152365 main.go:141] libmachine: (default-k8s-diff-port-038572) DBG | Closing plugin on server side
	I0819 13:20:12.973235 2152365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:20:12.973289 2152365 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-038572"
	I0819 13:20:12.975602 2152365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:20:12.976614 2152365 addons.go:510] duration metric: took 1.587039687s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 13:20:15.704949 2153523 start.go:364] duration metric: took 2m34.676043713s to acquireMachinesLock for "embed-certs-823470"
	I0819 13:20:15.705034 2153523 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:15.705058 2153523 fix.go:54] fixHost starting: 
	I0819 13:20:15.705598 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:15.705649 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:15.726423 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0819 13:20:15.726876 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:15.727565 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:20:15.727603 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:15.727986 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:15.728273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:15.728457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:20:15.730123 2153523 fix.go:112] recreateIfNeeded on embed-certs-823470: state=Stopped err=<nil>
	I0819 13:20:15.730167 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	W0819 13:20:15.730354 2153523 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:15.732004 2153523 out.go:177] * Restarting existing kvm2 VM for "embed-certs-823470" ...
	I0819 13:20:11.621327 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:11.621751 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | unable to find current IP address of domain old-k8s-version-620467 in network mk-old-k8s-version-620467
	I0819 13:20:11.621783 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | I0819 13:20:11.621716 2154107 retry.go:31] will retry after 2.756598641s: waiting for machine to come up
	I0819 13:20:14.379562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380111 2152780 main.go:141] libmachine: (old-k8s-version-620467) Found IP for machine: 192.168.39.238
	I0819 13:20:14.380135 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserving static IP address...
	I0819 13:20:14.380149 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has current primary IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.380616 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.380648 2152780 main.go:141] libmachine: (old-k8s-version-620467) Reserved static IP address: 192.168.39.238
	I0819 13:20:14.380673 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | skip adding static IP to network mk-old-k8s-version-620467 - found existing host DHCP lease matching {name: "old-k8s-version-620467", mac: "52:54:00:88:ef:79", ip: "192.168.39.238"}
	I0819 13:20:14.380687 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Getting to WaitForSSH function...
	I0819 13:20:14.380699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Waiting for SSH to be available...
	I0819 13:20:14.383088 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383484 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.383516 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.383650 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH client type: external
	I0819 13:20:14.383682 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa (-rw-------)
	I0819 13:20:14.383720 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:14.383731 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | About to run SSH command:
	I0819 13:20:14.383745 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | exit 0
	I0819 13:20:14.508402 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:14.508766 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetConfigRaw
	I0819 13:20:14.509427 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.512053 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512504 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.512573 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.512759 2152780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/config.json ...
	I0819 13:20:14.512944 2152780 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:14.512962 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:14.513159 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.515266 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515554 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.515588 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.515743 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.515901 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516039 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.516142 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.516308 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.516582 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.516597 2152780 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:14.628611 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:14.628650 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.628968 2152780 buildroot.go:166] provisioning hostname "old-k8s-version-620467"
	I0819 13:20:14.629003 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.629202 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.631844 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632148 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.632211 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.632516 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.632706 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632870 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.632975 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.633140 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.633383 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.633400 2152780 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-620467 && echo "old-k8s-version-620467" | sudo tee /etc/hostname
	I0819 13:20:14.759803 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-620467
	
	I0819 13:20:14.759840 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.763058 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763513 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.763553 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.763807 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:14.764027 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764208 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:14.764523 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:14.764729 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:14.764986 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:14.765013 2152780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-620467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-620467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-620467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:14.890791 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:14.890828 2152780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:14.890861 2152780 buildroot.go:174] setting up certificates
	I0819 13:20:14.890877 2152780 provision.go:84] configureAuth start
	I0819 13:20:14.890890 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetMachineName
	I0819 13:20:14.891337 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:14.894171 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894527 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.894562 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.894695 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:14.897240 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.897791 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:14.897825 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:14.898011 2152780 provision.go:143] copyHostCerts
	I0819 13:20:14.898080 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:14.898104 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:14.898169 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:14.898317 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:14.898330 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:14.898360 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:14.898434 2152780 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:14.898444 2152780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:14.898472 2152780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:14.898542 2152780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-620467 san=[127.0.0.1 192.168.39.238 localhost minikube old-k8s-version-620467]
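	(Editor's sketch, not part of the log: the server certificate generated above carries SANs for 127.0.0.1, the guest IP, localhost, minikube and the profile name. A minimal way to confirm those SANs on the generated server.pem with openssl, assuming openssl is available on the Jenkins host and using the path shown in the log:

	  openssl x509 -in /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem -noout -text \
	    | grep -A1 'Subject Alternative Name')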
	I0819 13:20:15.009796 2152780 provision.go:177] copyRemoteCerts
	I0819 13:20:15.009880 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:15.009931 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.012857 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013153 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.013187 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.013486 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.013699 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.013862 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.013990 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.104066 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:15.131682 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:20:15.158717 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:15.186486 2152780 provision.go:87] duration metric: took 295.591794ms to configureAuth
	I0819 13:20:15.186518 2152780 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:15.186739 2152780 config.go:182] Loaded profile config "old-k8s-version-620467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 13:20:15.186833 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.189580 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.189997 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.190028 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.190212 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.190389 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190528 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.190646 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.190784 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.190958 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.190973 2152780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:15.461836 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:15.461866 2152780 machine.go:96] duration metric: took 948.910153ms to provisionDockerMachine
	I0819 13:20:15.461878 2152780 start.go:293] postStartSetup for "old-k8s-version-620467" (driver="kvm2")
	I0819 13:20:15.461894 2152780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:15.461913 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.462313 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:15.462348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.465294 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465707 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.465743 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.465949 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.466136 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.466302 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.466443 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.551428 2152780 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:15.556041 2152780 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:15.556063 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:15.556137 2152780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:15.556267 2152780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:15.556410 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:15.565971 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:15.592103 2152780 start.go:296] duration metric: took 130.206992ms for postStartSetup
	I0819 13:20:15.592153 2152780 fix.go:56] duration metric: took 18.790655174s for fixHost
	I0819 13:20:15.592201 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.595223 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595583 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.595623 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.595769 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.596033 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596249 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.596391 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.596578 2152780 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:15.596767 2152780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0819 13:20:15.596780 2152780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:15.704773 2152780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073615.677156491
	
	I0819 13:20:15.704802 2152780 fix.go:216] guest clock: 1724073615.677156491
	I0819 13:20:15.704811 2152780 fix.go:229] Guest: 2024-08-19 13:20:15.677156491 +0000 UTC Remote: 2024-08-19 13:20:15.592158206 +0000 UTC m=+209.822213223 (delta=84.998285ms)
	I0819 13:20:15.704854 2152780 fix.go:200] guest clock delta is within tolerance: 84.998285ms
	I0819 13:20:15.704862 2152780 start.go:83] releasing machines lock for "old-k8s-version-620467", held for 18.903410478s
	I0819 13:20:15.704893 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.705211 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:15.708190 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708528 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.708558 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.708733 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709348 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709538 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .DriverName
	I0819 13:20:15.709631 2152780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:15.709682 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.709793 2152780 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:15.709821 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHHostname
	I0819 13:20:15.712366 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712759 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.712787 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712818 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.712943 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713113 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713309 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.713374 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:15.713407 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:15.713481 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.713573 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHPort
	I0819 13:20:15.713714 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHKeyPath
	I0819 13:20:15.713872 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetSSHUsername
	I0819 13:20:15.714033 2152780 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/old-k8s-version-620467/id_rsa Username:docker}
	I0819 13:20:15.732993 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Start
	I0819 13:20:15.733180 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring networks are active...
	I0819 13:20:15.734170 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network default is active
	I0819 13:20:15.734566 2153523 main.go:141] libmachine: (embed-certs-823470) Ensuring network mk-embed-certs-823470 is active
	I0819 13:20:15.734992 2153523 main.go:141] libmachine: (embed-certs-823470) Getting domain xml...
	I0819 13:20:15.735675 2153523 main.go:141] libmachine: (embed-certs-823470) Creating domain...
	I0819 13:20:15.818839 2152780 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:15.827091 2152780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:15.989552 2152780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:15.995991 2152780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:15.996067 2152780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:16.012594 2152780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:16.012622 2152780 start.go:495] detecting cgroup driver to use...
	I0819 13:20:16.012691 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:16.030071 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:16.048651 2152780 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:16.048715 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:16.063500 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:16.078370 2152780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:16.194353 2152780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:16.347921 2152780 docker.go:233] disabling docker service ...
	I0819 13:20:16.347992 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:16.363528 2152780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:16.377515 2152780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:16.526660 2152780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:16.672207 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:16.686323 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:16.707555 2152780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 13:20:16.707634 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.718679 2152780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:16.718775 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.729446 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.740145 2152780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:16.751813 2152780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:16.767680 2152780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:16.779680 2152780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:16.779754 2152780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:16.798205 2152780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:16.812684 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:16.999666 2152780 ssh_runner.go:195] Run: sudo systemctl restart crio
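	(Editor's sketch, not part of the log: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A minimal check on the guest, with expected values taken from the commands in the log:

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, roughly:
	  #   pause_image = "registry.k8s.io/pause:3.2"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod")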
	I0819 13:20:17.159585 2152780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:17.159673 2152780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:17.165707 2152780 start.go:563] Will wait 60s for crictl version
	I0819 13:20:17.165781 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:17.170949 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:17.222170 2152780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:17.222248 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.250627 2152780 ssh_runner.go:195] Run: crio --version
	I0819 13:20:17.278578 2152780 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 13:20:13.644589 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:15.645507 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:18.146247 2152365 node_ready.go:53] node "default-k8s-diff-port-038572" has status "Ready":"False"
	I0819 13:20:17.279632 2152780 main.go:141] libmachine: (old-k8s-version-620467) Calling .GetIP
	I0819 13:20:17.282855 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283256 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:ef:79", ip: ""} in network mk-old-k8s-version-620467: {Iface:virbr2 ExpiryTime:2024-08-19 14:20:08 +0000 UTC Type:0 Mac:52:54:00:88:ef:79 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:old-k8s-version-620467 Clientid:01:52:54:00:88:ef:79}
	I0819 13:20:17.283289 2152780 main.go:141] libmachine: (old-k8s-version-620467) DBG | domain old-k8s-version-620467 has defined IP address 192.168.39.238 and MAC address 52:54:00:88:ef:79 in network mk-old-k8s-version-620467
	I0819 13:20:17.283501 2152780 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:17.288251 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:17.301778 2152780 kubeadm.go:883] updating cluster {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:17.301950 2152780 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 13:20:17.302022 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:17.348389 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:17.348477 2152780 ssh_runner.go:195] Run: which lz4
	I0819 13:20:17.352759 2152780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:17.357046 2152780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:17.357075 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 13:20:19.106587 2152780 crio.go:462] duration metric: took 1.753868264s to copy over tarball
	I0819 13:20:19.106686 2152780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:17.131501 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting to get IP...
	I0819 13:20:17.132786 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.133215 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.133322 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.133191 2154333 retry.go:31] will retry after 299.991818ms: waiting for machine to come up
	I0819 13:20:17.434764 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.435181 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.435207 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.435148 2154333 retry.go:31] will retry after 236.856343ms: waiting for machine to come up
	I0819 13:20:17.673895 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:17.674570 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:17.674599 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:17.674464 2154333 retry.go:31] will retry after 393.585081ms: waiting for machine to come up
	I0819 13:20:18.070571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.071042 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.071102 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.070993 2154333 retry.go:31] will retry after 544.808617ms: waiting for machine to come up
	I0819 13:20:18.617933 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:18.618430 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:18.618462 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:18.618383 2154333 retry.go:31] will retry after 679.633919ms: waiting for machine to come up
	I0819 13:20:19.299719 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.300300 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.300332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.300244 2154333 retry.go:31] will retry after 612.319499ms: waiting for machine to come up
	I0819 13:20:19.914955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:19.915514 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:19.915550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:19.915448 2154333 retry.go:31] will retry after 831.709673ms: waiting for machine to come up
	I0819 13:20:20.748907 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:20.749441 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:20.749470 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:20.749386 2154333 retry.go:31] will retry after 1.039050841s: waiting for machine to come up
	I0819 13:20:19.147118 2152365 node_ready.go:49] node "default-k8s-diff-port-038572" has status "Ready":"True"
	I0819 13:20:19.147148 2152365 node_ready.go:38] duration metric: took 7.505954816s for node "default-k8s-diff-port-038572" to be "Ready" ...
	I0819 13:20:19.147159 2152365 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:19.156331 2152365 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.164963 2152365 pod_ready.go:93] pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:19.164994 2152365 pod_ready.go:82] duration metric: took 8.624394ms for pod "coredns-6f6b679f8f-8tl4c" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:19.165007 2152365 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:21.171762 2152365 pod_ready.go:103] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:22.814104 2152365 pod_ready.go:93] pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.814140 2152365 pod_ready.go:82] duration metric: took 3.649123155s for pod "etcd-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.814157 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820468 2152365 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.820494 2152365 pod_ready.go:82] duration metric: took 6.328831ms for pod "kube-apiserver-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.820505 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826256 2152365 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.826288 2152365 pod_ready.go:82] duration metric: took 5.775237ms for pod "kube-controller-manager-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.826302 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832133 2152365 pod_ready.go:93] pod "kube-proxy-wnkk2" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.832182 2152365 pod_ready.go:82] duration metric: took 5.850742ms for pod "kube-proxy-wnkk2" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.832198 2152365 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837910 2152365 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:22.837938 2152365 pod_ready.go:82] duration metric: took 5.730926ms for pod "kube-scheduler-default-k8s-diff-port-038572" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.837952 2152365 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:22.098419 2152780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.991694091s)
	I0819 13:20:22.098466 2152780 crio.go:469] duration metric: took 2.991838198s to extract the tarball
	I0819 13:20:22.098476 2152780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:22.143215 2152780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:22.183884 2152780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 13:20:22.183917 2152780 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:22.183992 2152780 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.184012 2152780 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.184040 2152780 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 13:20:22.184050 2152780 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.183999 2152780 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.184002 2152780 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.184087 2152780 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.184133 2152780 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185819 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.185861 2152780 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 13:20:22.185868 2152780 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.185909 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.185940 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.185817 2152780 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.185825 2152780 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.355834 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.356969 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.361175 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.382402 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.393972 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.417644 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 13:20:22.451269 2152780 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 13:20:22.451335 2152780 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.451333 2152780 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 13:20:22.451374 2152780 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.451393 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.451428 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.479570 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:22.490369 2152780 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 13:20:22.490426 2152780 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.490484 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.519930 2152780 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 13:20:22.519990 2152780 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.520062 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.527794 2152780 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 13:20:22.527836 2152780 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.527922 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540287 2152780 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 13:20:22.540337 2152780 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 13:20:22.540362 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.540381 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.540423 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665204 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.665271 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.665317 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.665410 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.665441 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.665481 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.725932 2152780 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.821227 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:22.825977 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:22.826179 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:22.826276 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 13:20:22.826365 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 13:20:22.826517 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:22.908181 2152780 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 13:20:22.908249 2152780 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 13:20:22.908306 2152780 ssh_runner.go:195] Run: which crictl
	I0819 13:20:22.974136 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 13:20:23.008192 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 13:20:23.008282 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 13:20:23.008318 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 13:20:23.008372 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 13:20:23.008406 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 13:20:23.008416 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.041323 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 13:20:23.119173 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 13:20:23.119212 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.119216 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 13:20:23.119787 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 13:20:23.158596 2152780 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 13:20:23.193414 2152780 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 13:20:23.193510 2152780 cache_images.go:92] duration metric: took 1.009576721s to LoadCachedImages
	W0819 13:20:23.193613 2152780 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0819 13:20:23.193632 2152780 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.20.0 crio true true} ...
	I0819 13:20:23.193746 2152780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-620467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:23.193835 2152780 ssh_runner.go:195] Run: crio config
	I0819 13:20:23.255630 2152780 cni.go:84] Creating CNI manager for ""
	I0819 13:20:23.255661 2152780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:23.255677 2152780 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:23.255711 2152780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-620467 NodeName:old-k8s-version-620467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:20:23.255897 2152780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-620467"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:23.255980 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:20:23.269531 2152780 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:23.269612 2152780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:23.282542 2152780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 13:20:23.302755 2152780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:23.322417 2152780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
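
	The kubeadm/kubelet/kube-proxy documents shown above are rendered in memory and then copied to the node as /var/tmp/minikube/kubeadm.yaml.new (the scp step just above). A minimal sketch of that render-to-buffer pattern using Go's standard text/template; the template body and field names here are illustrative stand-ins, not minikube's actual template.

	package main

	import (
		"bytes"
		"fmt"
		"text/template"
	)

	// kubeadmTmpl is a hypothetical, trimmed-down template standing in for the
	// full InitConfiguration/ClusterConfiguration document shown in the log above.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	type kubeadmParams struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))

		var buf bytes.Buffer
		// Render into memory first; the result can then be copied to the node,
		// e.g. as /var/tmp/minikube/kubeadm.yaml.new.
		if err := t.Execute(&buf, kubeadmParams{
			AdvertiseAddress: "192.168.39.238",
			APIServerPort:    8443,
			CRISocket:        "/var/run/crio/crio.sock",
			NodeName:         "old-k8s-version-620467",
		}); err != nil {
			panic(err)
		}
		fmt.Print(buf.String())
	}
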
	I0819 13:20:23.342984 2152780 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:23.347053 2152780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:23.360095 2152780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:23.497495 2152780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:23.517984 2152780 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467 for IP: 192.168.39.238
	I0819 13:20:23.518059 2152780 certs.go:194] generating shared ca certs ...
	I0819 13:20:23.518125 2152780 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:23.518359 2152780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:23.518429 2152780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:23.518442 2152780 certs.go:256] generating profile certs ...
	I0819 13:20:23.518572 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/client.key
	I0819 13:20:23.518651 2152780 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key.53723c14
	I0819 13:20:23.518701 2152780 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key
	I0819 13:20:23.518856 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:23.518895 2152780 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:23.518909 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:23.518941 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:23.518969 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:23.519009 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:23.519073 2152780 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:23.519828 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:23.574690 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:23.616732 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:23.659531 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:23.706724 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:20:23.742394 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:20:23.774793 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:23.801731 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/old-k8s-version-620467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:23.827344 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:23.855338 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:23.880700 2152780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:23.906324 2152780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:23.923792 2152780 ssh_runner.go:195] Run: openssl version
	I0819 13:20:23.929722 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:23.940623 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946214 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.946316 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:23.952580 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:23.963550 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:23.974639 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979429 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.979515 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:23.985465 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:23.996822 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:24.008876 2152780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013845 2152780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.013909 2152780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:24.019736 2152780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
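
	The `openssl x509 -hash -noout` runs above compute the subject-name hash OpenSSL uses to look up CAs under /etc/ssl/certs: symlink names such as b5213941.0, 51391683.0 and 3ec20f2e.0 are that 8-hex-digit hash plus a ".0" suffix. A minimal Go sketch of deriving the link name and creating the symlink; the paths are the ones from the log, and running it for real needs root.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// hashLinkName runs `openssl x509 -hash -noout -in <cert>` and turns the
	// 8-hex-digit subject hash into the "<hash>.0" file name OpenSSL expects
	// under /etc/ssl/certs.
	func hashLinkName(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		name, err := hashLinkName(cert)
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + name
		// Equivalent to the log's `ln -fs` (without the pre-existing-link check).
		if err := os.Symlink(cert, link); err != nil {
			fmt.Println("symlink:", err)
		}
		fmt.Println("link", link, "->", cert)
	}
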
	I0819 13:20:24.032573 2152780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:24.038620 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:24.046070 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:24.052573 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:24.059242 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:24.065487 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:24.072116 2152780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
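
	The `-checkend 86400` runs above rely on openssl's exit status: it exits 0 if the certificate is still valid 86400 seconds (24 hours) from now, and non-zero if it will have expired. A minimal Go sketch of the same check; the helper name is made up for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor24h shells out exactly like the log lines above:
	// `openssl x509 -noout -in <path> -checkend 86400` exits 0 only if the
	// certificate will NOT expire within the next 86400 seconds.
	func certValidFor24h(path string) (bool, error) {
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
		if err == nil {
			return true, nil
		}
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // openssl ran, but the cert expires within 24h
		}
		return false, err // openssl missing, unreadable file, etc.
	}

	func main() {
		ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		fmt.Println(ok, err)
	}
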
	I0819 13:20:24.078756 2152780 kubeadm.go:392] StartCluster: {Name:old-k8s-version-620467 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-620467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:24.078878 2152780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:24.078950 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.122078 2152780 cri.go:89] found id: ""
	I0819 13:20:24.122170 2152780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:24.133909 2152780 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:24.133932 2152780 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:24.133989 2152780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:24.144440 2152780 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:24.145449 2152780 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-620467" does not appear in /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:20:24.146113 2152780 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-2086935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-620467" cluster setting kubeconfig missing "old-k8s-version-620467" context setting]
	I0819 13:20:24.147113 2152780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:24.226327 2152780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:24.236890 2152780 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0819 13:20:24.236922 2152780 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:24.236936 2152780 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:24.236978 2152780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:24.274144 2152780 cri.go:89] found id: ""
	I0819 13:20:24.274220 2152780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:24.291049 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:24.302678 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:24.302699 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:24.302746 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:24.312055 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:24.312122 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:24.321518 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:24.330507 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:24.330570 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:24.340192 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.349855 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:24.349914 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:24.359243 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:24.369020 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:24.369100 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:24.379184 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:24.389785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:24.524009 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.159266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.413924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.519529 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:25.635520 2152780 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:25.635627 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:21.790800 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:21.791370 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:21.791400 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:21.791324 2154333 retry.go:31] will retry after 1.360798876s: waiting for machine to come up
	I0819 13:20:23.153492 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:23.153913 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:23.153942 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:23.153859 2154333 retry.go:31] will retry after 2.180186803s: waiting for machine to come up
	I0819 13:20:25.337305 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:25.337871 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:25.337903 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:25.337719 2154333 retry.go:31] will retry after 1.933341809s: waiting for machine to come up
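
	The retry.go:31 lines above show the kvm2 driver polling libvirt's DHCP leases for the embed-certs VM's IP and growing the delay between attempts. A minimal sketch of that wait-with-backoff shape; the lookup function is a hypothetical stand-in for querying the mk-embed-certs-823470 network.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
	// of the machine's network for its MAC address.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoIP
		}
		return "192.168.61.50", nil
	}

	func main() {
		backoff := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			backoff *= 2 // grow the delay between polls, as the log's retry intervals do
		}
	}
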
	I0819 13:20:24.846640 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:27.344995 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:26.135764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:26.636618 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.136024 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.636046 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.135975 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:28.636411 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.136478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:29.636585 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.136183 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:30.636717 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:27.273276 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:27.273766 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:27.273796 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:27.273706 2154333 retry.go:31] will retry after 3.242792205s: waiting for machine to come up
	I0819 13:20:30.520205 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:30.520644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | unable to find current IP address of domain embed-certs-823470 in network mk-embed-certs-823470
	I0819 13:20:30.520679 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | I0819 13:20:30.520585 2154333 retry.go:31] will retry after 2.921954134s: waiting for machine to come up
	I0819 13:20:29.843436 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:31.843702 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.021186 2151869 start.go:364] duration metric: took 51.034331815s to acquireMachinesLock for "no-preload-441627"
	I0819 13:20:35.021266 2151869 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:20:35.021275 2151869 fix.go:54] fixHost starting: 
	I0819 13:20:35.021747 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:20:35.021788 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:20:35.039025 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0819 13:20:35.039480 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:20:35.039951 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:20:35.039978 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:20:35.040398 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:20:35.040610 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:35.040779 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:20:35.042642 2151869 fix.go:112] recreateIfNeeded on no-preload-441627: state=Stopped err=<nil>
	I0819 13:20:35.042670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	W0819 13:20:35.042852 2151869 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:20:35.044620 2151869 out.go:177] * Restarting existing kvm2 VM for "no-preload-441627" ...
	I0819 13:20:31.136686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:31.636239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.136372 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:32.635927 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.136039 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:33.635721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.136156 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:34.636069 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.136557 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:35.636478 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
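
	The repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are a simple poll: roughly every half second the restart path checks whether a kube-apiserver process has appeared, until a deadline. A minimal sketch of that loop, run locally instead of over SSH; the function name and timeout are assumptions for illustration.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until it
	// succeeds (the process exists) or the timeout elapses, mirroring the
	// half-second cadence visible in the log timestamps above.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}
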
	I0819 13:20:33.446149 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.446796 2153523 main.go:141] libmachine: (embed-certs-823470) Found IP for machine: 192.168.61.50
	I0819 13:20:33.446825 2153523 main.go:141] libmachine: (embed-certs-823470) Reserving static IP address...
	I0819 13:20:33.446843 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has current primary IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.447326 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.447357 2153523 main.go:141] libmachine: (embed-certs-823470) Reserved static IP address: 192.168.61.50
	I0819 13:20:33.447375 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | skip adding static IP to network mk-embed-certs-823470 - found existing host DHCP lease matching {name: "embed-certs-823470", mac: "52:54:00:1d:ac:f9", ip: "192.168.61.50"}
	I0819 13:20:33.447390 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Getting to WaitForSSH function...
	I0819 13:20:33.447402 2153523 main.go:141] libmachine: (embed-certs-823470) Waiting for SSH to be available...
	I0819 13:20:33.449644 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.449934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.449966 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.450063 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH client type: external
	I0819 13:20:33.450080 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa (-rw-------)
	I0819 13:20:33.450128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:33.450148 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | About to run SSH command:
	I0819 13:20:33.450160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | exit 0
	I0819 13:20:33.580225 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:33.580660 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetConfigRaw
	I0819 13:20:33.581310 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.583950 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584345 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.584382 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.584697 2153523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/config.json ...
	I0819 13:20:33.584932 2153523 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:33.584961 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:33.585138 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.587429 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587735 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.587768 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.587903 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.588123 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588333 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.588507 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.588675 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.588873 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.588886 2153523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:33.696497 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:33.696528 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696770 2153523 buildroot.go:166] provisioning hostname "embed-certs-823470"
	I0819 13:20:33.696788 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.696995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.699605 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.699955 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.699991 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.700129 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.700374 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700531 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.700668 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.700822 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.701007 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.701021 2153523 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-823470 && echo "embed-certs-823470" | sudo tee /etc/hostname
	I0819 13:20:33.826398 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-823470
	
	I0819 13:20:33.826444 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.829353 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.829751 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.829791 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.830030 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:33.830215 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830383 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:33.830504 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:33.830662 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:33.830879 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:33.830896 2153523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-823470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-823470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-823470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:33.952843 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:33.952881 2153523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:33.952908 2153523 buildroot.go:174] setting up certificates
	I0819 13:20:33.952920 2153523 provision.go:84] configureAuth start
	I0819 13:20:33.952930 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetMachineName
	I0819 13:20:33.953214 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:33.955916 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956302 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.956332 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.956520 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:33.958787 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959101 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:33.959128 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:33.959257 2153523 provision.go:143] copyHostCerts
	I0819 13:20:33.959321 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:33.959341 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:33.959391 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:33.959505 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:33.959516 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:33.959537 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:33.959593 2153523 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:33.959600 2153523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:33.959618 2153523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:33.959670 2153523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-823470 san=[127.0.0.1 192.168.61.50 embed-certs-823470 localhost minikube]
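
	The provision.go:117 line above generates the machine's server certificate, signed by the shared CA, with the SAN list shown (127.0.0.1, 192.168.61.50, embed-certs-823470, localhost, minikube). A minimal, self-contained sketch of producing such a CA-signed server cert with Go's crypto/x509; here the CA is generated in-process purely so the example runs, whereas the log reuses the CA already under .minikube/certs.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA so the sketch is self-contained.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SAN list from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-823470"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-823470", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.50")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
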
	I0819 13:20:34.309596 2153523 provision.go:177] copyRemoteCerts
	I0819 13:20:34.309658 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:34.309693 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.312341 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312722 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.312757 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.312897 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.313100 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.313221 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.313349 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.400305 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:34.423939 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:34.447427 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:34.474337 2153523 provision.go:87] duration metric: took 521.386141ms to configureAuth
	I0819 13:20:34.474374 2153523 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:34.474559 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:34.474655 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.477282 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477721 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.477754 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.477884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.478108 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478296 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.478443 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.478661 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.478864 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.478886 2153523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:34.768186 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:34.768232 2153523 machine.go:96] duration metric: took 1.183282981s to provisionDockerMachine
	I0819 13:20:34.768244 2153523 start.go:293] postStartSetup for "embed-certs-823470" (driver="kvm2")
	I0819 13:20:34.768255 2153523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:34.768273 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:34.768593 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:34.768626 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.771022 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771451 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.771485 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.771636 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.771820 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.771979 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.772095 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:34.862868 2153523 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:34.867619 2153523 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:34.867659 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:34.867746 2153523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:34.867837 2153523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:34.867931 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:34.877409 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:34.907227 2153523 start.go:296] duration metric: took 138.963836ms for postStartSetup
	I0819 13:20:34.907285 2153523 fix.go:56] duration metric: took 19.202241044s for fixHost
	I0819 13:20:34.907315 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:34.909900 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910238 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:34.910273 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:34.910466 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:34.910671 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910832 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:34.910952 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:34.911132 2153523 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:34.911306 2153523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0819 13:20:34.911316 2153523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:35.020989 2153523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073634.971972974
	
	I0819 13:20:35.021036 2153523 fix.go:216] guest clock: 1724073634.971972974
	I0819 13:20:35.021046 2153523 fix.go:229] Guest: 2024-08-19 13:20:34.971972974 +0000 UTC Remote: 2024-08-19 13:20:34.907292264 +0000 UTC m=+174.021390970 (delta=64.68071ms)
	I0819 13:20:35.021076 2153523 fix.go:200] guest clock delta is within tolerance: 64.68071ms
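
	The fix.go lines above capture `date +%s.%N` from the guest and compare it against the host clock; the delta (64.68071ms here) only has to stay within a tolerance before provisioning continues. A minimal sketch of that comparison, using the two timestamps from the log; the one-second tolerance is an assumption for illustration.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the `date +%s.%N` output captured from the guest
	// (e.g. "1724073634.971972974") and returns host minus guest.
	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		secs, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsecs int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad to nanosecond precision
			if nsecs, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return host.Sub(time.Unix(secs, nsecs)), nil
	}

	func main() {
		const tolerance = time.Second               // assumed tolerance, for illustration only
		host := time.Unix(0, 1724073634907292264)   // the "Remote:" timestamp from the log
		delta, err := guestClockDelta("1724073634.971972974", host)
		if err != nil {
			panic(err)
		}
		abs := delta
		if abs < 0 {
			abs = -abs
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, abs < tolerance)
	}
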
	I0819 13:20:35.021082 2153523 start.go:83] releasing machines lock for "embed-certs-823470", held for 19.316080353s
	I0819 13:20:35.021111 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.021414 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:35.024146 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024613 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.024651 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.024793 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025291 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025478 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:20:35.025561 2153523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:35.025612 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.025736 2153523 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:35.025759 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:20:35.028550 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028571 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.028973 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029003 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029031 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:35.029062 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:35.029272 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029359 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:20:35.029457 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029551 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:20:35.029597 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029723 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.029741 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:20:35.029893 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:20:35.137494 2153523 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:35.145726 2153523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:35.296420 2153523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:35.303481 2153523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:35.303566 2153523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:35.323781 2153523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:35.323813 2153523 start.go:495] detecting cgroup driver to use...
	I0819 13:20:35.323893 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:35.343648 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:35.357864 2153523 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:35.357920 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:35.371911 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:35.388107 2153523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:35.496871 2153523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:35.635989 2153523 docker.go:233] disabling docker service ...
	I0819 13:20:35.636061 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:35.651531 2153523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:35.665374 2153523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:35.810969 2153523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:35.045992 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Start
	I0819 13:20:35.046198 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring networks are active...
	I0819 13:20:35.046943 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network default is active
	I0819 13:20:35.047261 2151869 main.go:141] libmachine: (no-preload-441627) Ensuring network mk-no-preload-441627 is active
	I0819 13:20:35.047846 2151869 main.go:141] libmachine: (no-preload-441627) Getting domain xml...
	I0819 13:20:35.048719 2151869 main.go:141] libmachine: (no-preload-441627) Creating domain...
	I0819 13:20:36.329569 2151869 main.go:141] libmachine: (no-preload-441627) Waiting to get IP...
	I0819 13:20:36.330479 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.330862 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.330953 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.330842 2154491 retry.go:31] will retry after 214.344142ms: waiting for machine to come up
	I0819 13:20:35.931109 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:35.945746 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:35.963978 2153523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:35.964062 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.973860 2153523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:35.973944 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.983833 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:35.993733 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.003724 2153523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:36.014327 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.024944 2153523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.043960 2153523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:36.054444 2153523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:36.064239 2153523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:36.064325 2153523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:36.082508 2153523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:36.092677 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:36.216592 2153523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:36.368516 2153523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:36.368599 2153523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:36.373579 2153523 start.go:563] Will wait 60s for crictl version
	I0819 13:20:36.373630 2153523 ssh_runner.go:195] Run: which crictl
	I0819 13:20:36.377433 2153523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:36.416842 2153523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:36.416933 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.445307 2153523 ssh_runner.go:195] Run: crio --version
	I0819 13:20:36.480522 2153523 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 13:20:33.845335 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:35.847273 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:36.135939 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.136635 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:37.635748 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.136679 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:38.636322 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.136560 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:39.635772 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.135797 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.635764 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:36.481770 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetIP
	I0819 13:20:36.485210 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485665 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:20:36.485697 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:20:36.485956 2153523 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:36.490383 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:36.504020 2153523 kubeadm.go:883] updating cluster {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:36.504151 2153523 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:36.504236 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:36.540784 2153523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:36.540871 2153523 ssh_runner.go:195] Run: which lz4
	I0819 13:20:36.545693 2153523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 13:20:36.550278 2153523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 13:20:36.550307 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 13:20:38.008380 2153523 crio.go:462] duration metric: took 1.462725002s to copy over tarball
	I0819 13:20:38.008468 2153523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 13:20:40.267095 2153523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258591648s)
	I0819 13:20:40.267126 2153523 crio.go:469] duration metric: took 2.258713555s to extract the tarball
	I0819 13:20:40.267133 2153523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 13:20:40.307080 2153523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:40.350343 2153523 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 13:20:40.350368 2153523 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:20:40.350377 2153523 kubeadm.go:934] updating node { 192.168.61.50 8443 v1.31.0 crio true true} ...
	I0819 13:20:40.350504 2153523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-823470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:20:40.350594 2153523 ssh_runner.go:195] Run: crio config
	I0819 13:20:40.397534 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:40.397560 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:40.397582 2153523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:20:40.397612 2153523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-823470 NodeName:embed-certs-823470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:20:40.397804 2153523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-823470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:20:40.397880 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:20:40.409664 2153523 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:20:40.409738 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:20:40.420526 2153523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:20:40.437016 2153523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:20:40.454222 2153523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 13:20:40.471532 2153523 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0819 13:20:40.475436 2153523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:20:40.487570 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:40.612612 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:20:40.630016 2153523 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470 for IP: 192.168.61.50
	I0819 13:20:40.630049 2153523 certs.go:194] generating shared ca certs ...
	I0819 13:20:40.630073 2153523 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:20:40.630265 2153523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:20:40.630329 2153523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:20:40.630343 2153523 certs.go:256] generating profile certs ...
	I0819 13:20:40.630457 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/client.key
	I0819 13:20:40.630553 2153523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key.208ee791
	I0819 13:20:40.630606 2153523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key
	I0819 13:20:40.630756 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:20:40.630812 2153523 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:20:40.630824 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:20:40.630859 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:20:40.630891 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:20:40.630915 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:20:40.630959 2153523 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:40.631651 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:20:40.660149 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:20:40.699120 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:20:40.726336 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:20:40.765233 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 13:20:40.804970 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:20:40.846737 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:20:40.871358 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/embed-certs-823470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 13:20:40.896688 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:20:40.920071 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:20:36.547335 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.547819 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.547855 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.547769 2154491 retry.go:31] will retry after 344.710336ms: waiting for machine to come up
	I0819 13:20:36.894655 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:36.895213 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:36.895249 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:36.895169 2154491 retry.go:31] will retry after 301.307077ms: waiting for machine to come up
	I0819 13:20:37.197664 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.198250 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.198286 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.198182 2154491 retry.go:31] will retry after 516.927721ms: waiting for machine to come up
	I0819 13:20:37.716831 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:37.717323 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:37.717359 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:37.717279 2154491 retry.go:31] will retry after 679.982986ms: waiting for machine to come up
	I0819 13:20:38.398678 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:38.399120 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:38.399157 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:38.399072 2154491 retry.go:31] will retry after 822.30179ms: waiting for machine to come up
	I0819 13:20:39.223045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:39.223575 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:39.223609 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:39.223519 2154491 retry.go:31] will retry after 936.040478ms: waiting for machine to come up
	I0819 13:20:40.161047 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:40.161593 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:40.161624 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:40.161538 2154491 retry.go:31] will retry after 1.301988289s: waiting for machine to come up
	I0819 13:20:38.347577 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:40.845179 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:42.845598 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:41.136326 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:41.635949 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.136274 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:42.636124 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.135998 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.636101 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.136242 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:44.635713 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.135796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:45.635702 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:40.943162 2153523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:20:40.969648 2153523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:20:40.988367 2153523 ssh_runner.go:195] Run: openssl version
	I0819 13:20:40.994353 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:20:41.006057 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010833 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.010908 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:20:41.016665 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:20:41.029047 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:20:41.040384 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044690 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.044755 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:20:41.050358 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
	I0819 13:20:41.061626 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:20:41.073480 2153523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077867 2153523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.077941 2153523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:20:41.083685 2153523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:20:41.096051 2153523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:20:41.100589 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:20:41.106791 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:20:41.112757 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:20:41.119083 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:20:41.125060 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:20:41.130528 2153523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:20:41.136615 2153523 kubeadm.go:392] StartCluster: {Name:embed-certs-823470 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-823470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:20:41.136781 2153523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:20:41.136848 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.184735 2153523 cri.go:89] found id: ""
	I0819 13:20:41.184820 2153523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:20:41.195134 2153523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:20:41.195162 2153523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:20:41.195215 2153523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:20:41.205105 2153523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:20:41.206197 2153523 kubeconfig.go:125] found "embed-certs-823470" server: "https://192.168.61.50:8443"
	I0819 13:20:41.208434 2153523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:20:41.218028 2153523 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.50
	I0819 13:20:41.218059 2153523 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:20:41.218073 2153523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:20:41.218128 2153523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:20:41.256254 2153523 cri.go:89] found id: ""
	I0819 13:20:41.256344 2153523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:20:41.273223 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:20:41.283149 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:20:41.283169 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:20:41.283228 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:20:41.292541 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:20:41.292624 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:20:41.302453 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:20:41.312032 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:20:41.312090 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:20:41.322411 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.331738 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:20:41.331817 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:20:41.342148 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:20:41.353410 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:20:41.353506 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:20:41.365035 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:20:41.378556 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:41.502298 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.511293 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008951961s)
	I0819 13:20:42.511332 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.728318 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.806475 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:42.889810 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:20:42.889917 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.390790 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.890775 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:43.939414 2153523 api_server.go:72] duration metric: took 1.049613567s to wait for apiserver process to appear ...
	I0819 13:20:43.939456 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:20:43.939495 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:43.940151 2153523 api_server.go:269] stopped: https://192.168.61.50:8443/healthz: Get "https://192.168.61.50:8443/healthz": dial tcp 192.168.61.50:8443: connect: connection refused
	I0819 13:20:44.439965 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:41.465077 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:41.465533 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:41.465577 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:41.465512 2154491 retry.go:31] will retry after 1.34269663s: waiting for machine to come up
	I0819 13:20:42.810485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:42.810988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:42.811017 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:42.810935 2154491 retry.go:31] will retry after 2.118455014s: waiting for machine to come up
	I0819 13:20:44.930721 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:44.931183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:44.931225 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:44.931151 2154491 retry.go:31] will retry after 2.224941461s: waiting for machine to come up
	I0819 13:20:44.846237 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:47.344303 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:46.957397 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.957427 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:46.957459 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:46.997895 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:20:46.997932 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:20:47.440580 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.444945 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.444984 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:47.939553 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:47.946543 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:47.946576 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.440206 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.446075 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.446114 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:48.939602 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:48.944480 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:48.944526 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.440053 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.444461 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.444502 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:49.940036 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:49.944425 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:20:49.944464 2153523 api_server.go:103] status: https://192.168.61.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:20:50.440006 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:20:50.445299 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:20:50.451900 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:20:50.451930 2153523 api_server.go:131] duration metric: took 6.512455813s to wait for apiserver health ...
	I0819 13:20:50.451941 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:20:50.451948 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:20:50.453835 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
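The block above is api_server.go polling https://192.168.61.50:8443/healthz roughly every 500ms: the apiservice-discovery-controller post-start hook keeps the endpoint at HTTP 500 until it finishes, after which the probe returns 200 and the wait completes (6.5s here). A rough shell equivalent of that polling loop, assuming the same endpoint and that /healthz is reachable without client certificates on this cluster (otherwise add --cert/--key); this is an illustrative stand-in, not minikube's Go implementation:

    # illustrative stand-in for the polling loop in api_server.go
    until curl -skf https://192.168.61.50:8443/healthz >/dev/null; do sleep 0.5; done
    curl -sk "https://192.168.61.50:8443/healthz?verbose"   # per-hook breakdown like the 500 bodies above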
	I0819 13:20:46.135706 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:46.635815 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.136558 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:47.636410 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.136300 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:48.636143 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.136463 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:49.635682 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.135692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:50.635841 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
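The interleaved 2152780 lines appear to come from another profile's restart, probing twice a second until a kube-apiserver process for that cluster shows up. Written as a loop, the same probe (pattern taken verbatim from the log) is simply:

    # illustrative; exits once the apiserver process exists
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done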
	I0819 13:20:50.454977 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:20:50.466694 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
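Here the bridge CNI decision above is materialized: a 496-byte conflist is copied straight from memory to /etc/cni/net.d/1-k8s.conflist on the node. The exact file minikube renders is not shown in the log; for illustration only, a typical bridge + portmap conflist of this shape (names and the subnet are placeholders, not the real contents) looks like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF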
	I0819 13:20:50.484966 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:20:50.507697 2153523 system_pods.go:59] 8 kube-system pods found
	I0819 13:20:50.507738 2153523 system_pods.go:61] "coredns-6f6b679f8f-rxjvv" [7eae7e32-b4c3-4941-9241-16cdde1007fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:20:50.507750 2153523 system_pods.go:61] "etcd-embed-certs-823470" [e2684e0e-8c4a-40da-ae5b-aa7f3a077e45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:20:50.507761 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [c438fbda-c1f4-49f8-8979-1194d91cdeab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:20:50.507770 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [528e4b3e-b3fd-4455-bb4f-79b4bbdcb434] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:20:50.507778 2153523 system_pods.go:61] "kube-proxy-jcnwd" [b5302230-969d-409e-8aba-98ff971e9e79] Running
	I0819 13:20:50.507787 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [213067c8-73c6-4eb8-b9dc-f4ea981421f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:20:50.507798 2153523 system_pods.go:61] "metrics-server-6867b74b74-8dwpf" [7e3ab377-075e-48d8-a9ce-191ba41e3ed7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:20:50.507812 2153523 system_pods.go:61] "storage-provisioner" [4dd080d9-174f-4c0c-9b06-39e98a71672b] Running
	I0819 13:20:50.507821 2153523 system_pods.go:74] duration metric: took 22.833545ms to wait for pod list to return data ...
	I0819 13:20:50.507831 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:20:50.512822 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:20:50.512850 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:20:50.512862 2153523 node_conditions.go:105] duration metric: took 5.023718ms to run NodePressure ...
	I0819 13:20:50.512877 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:20:50.853595 2153523 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859172 2153523 kubeadm.go:739] kubelet initialised
	I0819 13:20:50.859200 2153523 kubeadm.go:740] duration metric: took 5.576224ms waiting for restarted kubelet to initialise ...
	I0819 13:20:50.859211 2153523 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:20:50.865571 2153523 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.871631 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871652 2153523 pod_ready.go:82] duration metric: took 6.059239ms for pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.871660 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "coredns-6f6b679f8f-rxjvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.871666 2153523 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.875242 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875268 2153523 pod_ready.go:82] duration metric: took 3.594665ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.875279 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "etcd-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.875287 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.879429 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879460 2153523 pod_ready.go:82] duration metric: took 4.156713ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.879471 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.879485 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:50.887625 2153523 pod_ready.go:98] node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887647 2153523 pod_ready.go:82] duration metric: took 8.149859ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	E0819 13:20:50.887655 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-823470" hosting pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-823470" has status "Ready":"False"
	I0819 13:20:50.887662 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
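pod_ready.go walks the system-critical pods one by one and skips the Ready check as long as the node itself still reports Ready=False, which is why each wait above ends with "skipping!". The nearest kubectl equivalent of one of these waits, assuming the kubeconfig context is named after the profile (embed-certs-823470), would be:

    # illustrative only; the Go code polls the API directly rather than shelling out
    kubectl --context embed-certs-823470 -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m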
	I0819 13:20:47.157196 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:47.157589 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:47.157616 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:47.157532 2154491 retry.go:31] will retry after 3.412547504s: waiting for machine to come up
	I0819 13:20:50.571896 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:50.572460 2151869 main.go:141] libmachine: (no-preload-441627) DBG | unable to find current IP address of domain no-preload-441627 in network mk-no-preload-441627
	I0819 13:20:50.572485 2151869 main.go:141] libmachine: (no-preload-441627) DBG | I0819 13:20:50.572401 2154491 retry.go:31] will retry after 3.498419112s: waiting for machine to come up
	I0819 13:20:49.345181 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.346188 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:51.136307 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.136425 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:52.636034 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.136443 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:53.636129 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.135695 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:54.635860 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.136481 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:55.635655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:51.287880 2153523 pod_ready.go:93] pod "kube-proxy-jcnwd" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:51.287908 2153523 pod_ready.go:82] duration metric: took 400.238614ms for pod "kube-proxy-jcnwd" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:51.287922 2153523 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:53.294631 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.803650 2153523 pod_ready.go:103] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:54.073302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073795 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has current primary IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.073819 2151869 main.go:141] libmachine: (no-preload-441627) Found IP for machine: 192.168.50.161
	I0819 13:20:54.073833 2151869 main.go:141] libmachine: (no-preload-441627) Reserving static IP address...
	I0819 13:20:54.074302 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.074337 2151869 main.go:141] libmachine: (no-preload-441627) Reserved static IP address: 192.168.50.161
	I0819 13:20:54.074356 2151869 main.go:141] libmachine: (no-preload-441627) DBG | skip adding static IP to network mk-no-preload-441627 - found existing host DHCP lease matching {name: "no-preload-441627", mac: "52:54:00:f8:93:04", ip: "192.168.50.161"}
	I0819 13:20:54.074382 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Getting to WaitForSSH function...
	I0819 13:20:54.074398 2151869 main.go:141] libmachine: (no-preload-441627) Waiting for SSH to be available...
	I0819 13:20:54.076312 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076686 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.076719 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.076879 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH client type: external
	I0819 13:20:54.076920 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Using SSH private key: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa (-rw-------)
	I0819 13:20:54.076961 2151869 main.go:141] libmachine: (no-preload-441627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 13:20:54.076975 2151869 main.go:141] libmachine: (no-preload-441627) DBG | About to run SSH command:
	I0819 13:20:54.076988 2151869 main.go:141] libmachine: (no-preload-441627) DBG | exit 0
	I0819 13:20:54.208334 2151869 main.go:141] libmachine: (no-preload-441627) DBG | SSH cmd err, output: <nil>: 
	I0819 13:20:54.208710 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetConfigRaw
	I0819 13:20:54.209373 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.211827 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212176 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.212210 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.212409 2151869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/config.json ...
	I0819 13:20:54.212587 2151869 machine.go:93] provisionDockerMachine start ...
	I0819 13:20:54.212607 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:54.212810 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.214965 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215290 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.215315 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.215435 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.215615 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215735 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.215887 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.216050 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.216244 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.216256 2151869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:20:54.332379 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 13:20:54.332411 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332718 2151869 buildroot.go:166] provisioning hostname "no-preload-441627"
	I0819 13:20:54.332753 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.332948 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.335520 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.335930 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.335959 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.336119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.336357 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336513 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.336654 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.336918 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.337084 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.337104 2151869 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-441627 && echo "no-preload-441627" | sudo tee /etc/hostname
	I0819 13:20:54.466127 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-441627
	
	I0819 13:20:54.466178 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.469045 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469434 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.469482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.469648 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:54.469836 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470018 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:54.470174 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:54.470406 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:54.470654 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:54.470676 2151869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-441627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-441627/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-441627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:20:54.595063 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:20:54.595111 2151869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19479-2086935/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-2086935/.minikube}
	I0819 13:20:54.595162 2151869 buildroot.go:174] setting up certificates
	I0819 13:20:54.595178 2151869 provision.go:84] configureAuth start
	I0819 13:20:54.595191 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetMachineName
	I0819 13:20:54.595554 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:54.598703 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599031 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.599060 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.599202 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:54.601785 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602203 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:54.602231 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:54.602593 2151869 provision.go:143] copyHostCerts
	I0819 13:20:54.602661 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem, removing ...
	I0819 13:20:54.602680 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem
	I0819 13:20:54.602745 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.pem (1078 bytes)
	I0819 13:20:54.602878 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem, removing ...
	I0819 13:20:54.602892 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem
	I0819 13:20:54.602922 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/cert.pem (1123 bytes)
	I0819 13:20:54.603010 2151869 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem, removing ...
	I0819 13:20:54.603022 2151869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem
	I0819 13:20:54.603043 2151869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-2086935/.minikube/key.pem (1675 bytes)
	I0819 13:20:54.603116 2151869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem org=jenkins.no-preload-441627 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-441627]
	I0819 13:20:55.027338 2151869 provision.go:177] copyRemoteCerts
	I0819 13:20:55.027408 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:20:55.027438 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.030073 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.030550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.030826 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.031038 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.031231 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.031436 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.122187 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 13:20:55.147222 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:20:55.170341 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:20:55.195484 2151869 provision.go:87] duration metric: took 600.289644ms to configureAuth
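configureAuth regenerated a server certificate signed by the local minikube CA with the SANs listed above (127.0.0.1, 192.168.50.161, localhost, minikube, no-preload-441627) and scp'd it to /etc/docker on the node. An openssl sketch of the same kind of issuance, assuming ca.pem/ca-key.pem from the .minikube/certs directory; illustrative only, not the provision.go implementation:

    openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.no-preload-441627" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.161,DNS:localhost,DNS:minikube,DNS:no-preload-441627') \
      -out server.pem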
	I0819 13:20:55.195516 2151869 buildroot.go:189] setting minikube options for container-runtime
	I0819 13:20:55.195757 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:20:55.195862 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.198859 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199267 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.199294 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.199448 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.199666 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199834 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.199941 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.200126 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.200340 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.200357 2151869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 13:20:55.485890 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 13:20:55.485920 2151869 machine.go:96] duration metric: took 1.273319956s to provisionDockerMachine
	I0819 13:20:55.485931 2151869 start.go:293] postStartSetup for "no-preload-441627" (driver="kvm2")
	I0819 13:20:55.485941 2151869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:20:55.485957 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.486340 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:20:55.486379 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.489099 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489514 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.489550 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.489657 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.489866 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.490073 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.490298 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.582461 2151869 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:20:55.587827 2151869 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 13:20:55.587857 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/addons for local assets ...
	I0819 13:20:55.587928 2151869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-2086935/.minikube/files for local assets ...
	I0819 13:20:55.588015 2151869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem -> 20941932.pem in /etc/ssl/certs
	I0819 13:20:55.588115 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:20:55.600309 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:20:55.629229 2151869 start.go:296] duration metric: took 143.280186ms for postStartSetup
	I0819 13:20:55.629281 2151869 fix.go:56] duration metric: took 20.608007215s for fixHost
	I0819 13:20:55.629306 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.632299 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632733 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.632786 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.632921 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.633155 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633422 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.633640 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.633850 2151869 main.go:141] libmachine: Using SSH client type: native
	I0819 13:20:55.634098 2151869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0819 13:20:55.634117 2151869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 13:20:55.752998 2151869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724073655.727816487
	
	I0819 13:20:55.753028 2151869 fix.go:216] guest clock: 1724073655.727816487
	I0819 13:20:55.753039 2151869 fix.go:229] Guest: 2024-08-19 13:20:55.727816487 +0000 UTC Remote: 2024-08-19 13:20:55.629285878 +0000 UTC m=+354.230914724 (delta=98.530609ms)
	I0819 13:20:55.753080 2151869 fix.go:200] guest clock delta is within tolerance: 98.530609ms
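fix.go compares the guest's date +%s.%N against the host-side timestamp of the same moment and only resyncs the clock when the delta exceeds its tolerance; the 98.5ms skew here is accepted. A hand-rolled version of the same check (the key path is elided; <id_rsa> stands for the machine key under .minikube/machines/no-preload-441627/, and the ssh options are the ones logged earlier):

    # illustrative clock-skew check, not the fix.go code
    guest=$(ssh -i <id_rsa> docker@192.168.50.161 'date +%s.%N'); host=$(date +%s.%N)
    echo "delta: $(echo "$host - $guest" | bc)s"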
	I0819 13:20:55.753087 2151869 start.go:83] releasing machines lock for "no-preload-441627", held for 20.731851858s
	I0819 13:20:55.753119 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.753381 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:55.756443 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.756954 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.756986 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.757124 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757639 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757822 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:20:55.757927 2151869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:20:55.757999 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.758066 2151869 ssh_runner.go:195] Run: cat /version.json
	I0819 13:20:55.758098 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:20:55.760779 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761023 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761162 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761183 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761338 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761459 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:55.761482 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:55.761515 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761670 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:20:55.761756 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.761845 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:20:55.761895 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.762041 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:20:55.762192 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:20:55.846241 2151869 ssh_runner.go:195] Run: systemctl --version
	I0819 13:20:55.868048 2151869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 13:20:56.013592 2151869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 13:20:56.020322 2151869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 13:20:56.020403 2151869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:20:56.045197 2151869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 13:20:56.045234 2151869 start.go:495] detecting cgroup driver to use...
	I0819 13:20:56.045325 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 13:20:56.062336 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 13:20:56.078399 2151869 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:20:56.078476 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:20:56.094735 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:20:56.110097 2151869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:20:56.238433 2151869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:20:56.405540 2151869 docker.go:233] disabling docker service ...
	I0819 13:20:56.405638 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:20:56.421604 2151869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:20:56.434166 2151869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:20:56.565116 2151869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:20:56.688471 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:20:56.709293 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:20:56.727575 2151869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 13:20:56.727658 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.738492 2151869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 13:20:56.738559 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.748563 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.758372 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.768341 2151869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:20:56.778351 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.788072 2151869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.805215 2151869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 13:20:56.815496 2151869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:20:56.825033 2151869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 13:20:56.825087 2151869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 13:20:56.839732 2151869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:20:56.851166 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:20:56.976815 2151869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 13:20:57.154020 2151869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 13:20:57.154114 2151869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 13:20:57.159214 2151869 start.go:563] Will wait 60s for crictl version
	I0819 13:20:57.159282 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.163467 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:20:57.209305 2151869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 13:20:57.209417 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.239241 2151869 ssh_runner.go:195] Run: crio --version
	I0819 13:20:57.269905 2151869 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
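The sequence above rewrites the CRI-O drop-in (pause_image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0), points crictl at the crio socket, loads br_netfilter, enables ip_forward, and restarts crio. A few checks, run on the node, that confirm those edits took effect (illustrative; paths and expected values taken from the log):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml                 # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version                  # RuntimeName cri-o, RuntimeVersion 1.29.1, as logged above
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward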
	I0819 13:20:53.844042 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:55.845906 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:56.135714 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:56.635996 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.135847 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:57.635970 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.136650 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:58.636316 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.136506 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:20:59.636057 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.136545 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:00.636030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
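The repeated pgrep lines from process 2152780 above (and the further runs of the same command below) are a wait loop: that profile is polling for a kube-apiserver process whose command line mentions "minikube", retrying roughly every half second, which is why the timestamps advance in ~500ms steps. The same wait as a plain shell loop (sketch):

    # poll for the apiserver process about twice a second (sketch)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done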
	I0819 13:20:57.297660 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:20:57.297685 2153523 pod_ready.go:82] duration metric: took 6.009754619s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:57.297696 2153523 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	I0819 13:20:59.304626 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:20:57.270865 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetIP
	I0819 13:20:57.273851 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274158 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:20:57.274186 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:20:57.274425 2151869 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 13:20:57.278843 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
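The one-liner above is how host.minikube.internal is (re)pinned inside the guest's /etc/hosts: any existing entry is filtered out, the new mapping to 192.168.50.1 (the address the preceding grep looked for) is appended, and the result is copied back over /etc/hosts via a temp file. Spelled out step by step (sketch of the same command):

    # equivalent of the hosts-file update above (sketch)
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.50.1\thost.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts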
	I0819 13:20:57.296818 2151869 kubeadm.go:883] updating cluster {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:20:57.296988 2151869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 13:20:57.297042 2151869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:20:57.345101 2151869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 13:20:57.345130 2151869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 13:20:57.345185 2151869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.345236 2151869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.345463 2151869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.345553 2151869 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.345586 2151869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.345482 2151869 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 13:20:57.345492 2151869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.345496 2151869 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.347612 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.347625 2151869 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.347661 2151869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.347698 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.347797 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.348298 2151869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.348418 2151869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.348485 2151869 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 13:20:57.521291 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.528953 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.530354 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.538749 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.539434 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.564916 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 13:20:57.572522 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.596418 2151869 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 13:20:57.596483 2151869 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.596536 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.632436 2151869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 13:20:57.690627 2151869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.690666 2151869 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 13:20:57.690725 2151869 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.690677 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690776 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.690568 2151869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 13:20:57.690827 2151869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.690856 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.711488 2151869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 13:20:57.711546 2151869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.711597 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842702 2151869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 13:20:57.842794 2151869 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 13:20:57.842819 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.842815 2151869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.842845 2151869 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.842917 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842923 2151869 ssh_runner.go:195] Run: which crictl
	I0819 13:20:57.842933 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.842896 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.842769 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.863031 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:57.959879 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:57.959965 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:57.960047 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:57.960061 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:57.960110 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:57.960298 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:57.994432 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.109438 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.124785 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 13:20:58.124845 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 13:20:58.126704 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 13:20:58.126791 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 13:20:58.126801 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 13:20:58.184360 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:20:58.198177 2151869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 13:20:58.280314 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 13:20:58.280424 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.280471 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 13:20:58.280579 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:20:58.291181 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 13:20:58.291246 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 13:20:58.291278 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:20:58.291295 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 13:20:58.291333 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:20:58.291370 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:20:58.329433 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 13:20:58.329497 2151869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 13:20:58.329553 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:20:58.329588 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 13:20:58.329597 2151869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:20:58.329602 2151869 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329640 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 13:20:58.329660 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 13:20:58.329679 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 13:20:58.329704 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 13:20:58.329747 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 13:20:58.333814 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 13:21:00.287189 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.957515939s)
	I0819 13:21:00.287247 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 13:21:00.287270 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287208 2151869 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.957596318s)
	I0819 13:21:00.287323 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 13:21:00.287327 2151869 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 13:20:58.346708 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:00.844688 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.845156 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:01.136475 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.635872 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.136128 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:02.636458 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.136471 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:03.636080 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.135958 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:04.636494 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.136279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:05.636642 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:01.803995 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:03.804110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:05.805252 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:02.037910 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.750558372s)
	I0819 13:21:02.037948 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 13:21:02.037977 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:02.038031 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 13:21:04.606941 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.568877079s)
	I0819 13:21:04.606974 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 13:21:04.607003 2151869 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:04.607057 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 13:21:05.346854 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.108643 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:06.136247 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:06.636648 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.136212 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:07.636710 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.136112 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.636655 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.136349 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:09.635778 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.136689 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:10.635931 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:08.305329 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:10.805529 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:08.376612 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.769523227s)
	I0819 13:21:08.376661 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 13:21:08.376690 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:08.376749 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 13:21:10.337756 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.960979615s)
	I0819 13:21:10.337805 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 13:21:10.337835 2151869 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.337888 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 13:21:10.982386 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 13:21:10.982434 2151869 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.982483 2151869 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 13:21:10.345178 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.345444 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:11.135937 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:11.636273 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.136428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:12.635907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.135893 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.636189 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.136600 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:14.636483 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.135856 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.635957 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:13.303597 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:15.804612 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:12.334483 2151869 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351971561s)
	I0819 13:21:12.334524 2151869 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 13:21:12.334561 2151869 cache_images.go:123] Successfully loaded all cached images
	I0819 13:21:12.334568 2151869 cache_images.go:92] duration metric: took 14.989422621s to LoadCachedImages
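The sequence from 13:20:57.345 to here is one full LoadCachedImages pass for the no-preload profile: each required image is looked up with podman image inspect, anything missing or at the wrong digest is removed with crictl rmi, the cached tarball under /var/lib/minikube/images is stat'ed (and copied only if absent), and the tarball is then loaded with podman load. One cycle condensed, using the etcd image and paths from the log above (sketch, not the harness's exact logic):

    # one cache-load cycle, condensed (sketch)
    img=registry.k8s.io/etcd:3.5.15-0
    tar=/var/lib/minikube/images/etcd_3.5.15-0
    sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1 \
      || { sudo /usr/bin/crictl rmi "$img" 2>/dev/null; sudo podman load -i "$tar"; }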
	I0819 13:21:12.334580 2151869 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.0 crio true true} ...
	I0819 13:21:12.334746 2151869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-441627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:21:12.334860 2151869 ssh_runner.go:195] Run: crio config
	I0819 13:21:12.396639 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:12.396662 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:12.396674 2151869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:21:12.396703 2151869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-441627 NodeName:no-preload-441627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:21:12.396877 2151869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-441627"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:21:12.396958 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:21:12.408527 2151869 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:21:12.408601 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:21:12.418596 2151869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 13:21:12.435620 2151869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:21:12.452829 2151869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
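The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2161-byte scp just above) and, once the restart path further down decides the node needs it, promoted to kubeadm.yaml and fed to the individual kubeadm init phases. The handoff, condensed (sketch; the same commands appear verbatim later in this log):

    # promote the rendered config and feed it to the init phases (sketch)
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml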
	I0819 13:21:12.470756 2151869 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0819 13:21:12.474802 2151869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:21:12.486795 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:21:12.612626 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:21:12.639005 2151869 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627 for IP: 192.168.50.161
	I0819 13:21:12.639027 2151869 certs.go:194] generating shared ca certs ...
	I0819 13:21:12.639051 2151869 certs.go:226] acquiring lock for ca certs: {Name:mkb3c2422cbedf8a27f3584c3a3ecbed30049e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:21:12.639231 2151869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key
	I0819 13:21:12.639296 2151869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key
	I0819 13:21:12.639310 2151869 certs.go:256] generating profile certs ...
	I0819 13:21:12.639441 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/client.key
	I0819 13:21:12.639535 2151869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key.72b03406
	I0819 13:21:12.639591 2151869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key
	I0819 13:21:12.639752 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem (1338 bytes)
	W0819 13:21:12.639811 2151869 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193_empty.pem, impossibly tiny 0 bytes
	I0819 13:21:12.639822 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:21:12.639849 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/ca.pem (1078 bytes)
	I0819 13:21:12.639874 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:21:12.639899 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/key.pem (1675 bytes)
	I0819 13:21:12.639944 2151869 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem (1708 bytes)
	I0819 13:21:12.640741 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:21:12.674749 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:21:12.700466 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:21:12.726553 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 13:21:12.753171 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:21:12.779864 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:21:12.817715 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:21:12.841844 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/no-preload-441627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:21:12.867122 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:21:12.896767 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/certs/2094193.pem --> /usr/share/ca-certificates/2094193.pem (1338 bytes)
	I0819 13:21:12.921671 2151869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/ssl/certs/20941932.pem --> /usr/share/ca-certificates/20941932.pem (1708 bytes)
	I0819 13:21:12.948478 2151869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:21:12.967603 2151869 ssh_runner.go:195] Run: openssl version
	I0819 13:21:12.973848 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20941932.pem && ln -fs /usr/share/ca-certificates/20941932.pem /etc/ssl/certs/20941932.pem"
	I0819 13:21:12.984584 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989193 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 12:08 /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.989248 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20941932.pem
	I0819 13:21:12.995006 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20941932.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:21:13.005510 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:21:13.017919 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022858 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.022918 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:21:13.029040 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:21:13.039654 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2094193.pem && ln -fs /usr/share/ca-certificates/2094193.pem /etc/ssl/certs/2094193.pem"
	I0819 13:21:13.049968 2151869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054446 2151869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 12:08 /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.054502 2151869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2094193.pem
	I0819 13:21:13.060941 2151869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2094193.pem /etc/ssl/certs/51391683.0"
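The ls/openssl/ln triples above are how the CA material is made trusted inside the guest: openssl x509 -hash -noout prints the certificate's subject hash, and a /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem here) is created so that OpenSSL's lookup-by-hash finds it. Reproduced for one certificate (sketch):

    # derive the hash-named trust symlink for one CA cert (sketch)
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"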
	I0819 13:21:13.071954 2151869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:21:13.076513 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:21:13.082624 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:21:13.088523 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:21:13.094597 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:21:13.100589 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:21:13.106674 2151869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
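The -checkend 86400 probes above are freshness tests: openssl exits non-zero if the certificate would expire within the next 86400 seconds (24 hours), which presumably feeds the decision on whether control-plane certs need regenerating (the log only shows the probes, not the decision). Standalone (sketch):

    # non-zero exit means the cert expires within 24h (sketch)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another day" || echo "expiring soon"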
	I0819 13:21:13.112637 2151869 kubeadm.go:392] StartCluster: {Name:no-preload-441627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-441627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:21:13.112759 2151869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 13:21:13.112804 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.153143 2151869 cri.go:89] found id: ""
	I0819 13:21:13.153230 2151869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:21:13.164365 2151869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:21:13.164389 2151869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:21:13.164444 2151869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:21:13.174609 2151869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:21:13.175874 2151869 kubeconfig.go:125] found "no-preload-441627" server: "https://192.168.50.161:8443"
	I0819 13:21:13.178228 2151869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:21:13.188495 2151869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0819 13:21:13.188536 2151869 kubeadm.go:1160] stopping kube-system containers ...
	I0819 13:21:13.188562 2151869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 13:21:13.188623 2151869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:21:13.225547 2151869 cri.go:89] found id: ""
	I0819 13:21:13.225635 2151869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 13:21:13.242484 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:21:13.251983 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:21:13.252021 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:21:13.252085 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:21:13.260912 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:21:13.260984 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:21:13.270535 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:21:13.279500 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:21:13.279549 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:21:13.288906 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.297861 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:21:13.297914 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:21:13.307468 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:21:13.316801 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:21:13.316859 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
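The kubeadm.go:163 entries above apply the same rule to each of the four kubeconfigs: if the file does not reference https://control-plane.minikube.internal:8443 (here they simply do not exist yet), it is removed so the kubeadm kubeconfig phase below can regenerate it. The per-file logic, written as a loop (sketch):

    # stale-kubeconfig cleanup, looped (sketch of the per-file checks above)
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done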
	I0819 13:21:13.325756 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:21:13.336403 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:13.448374 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.613650 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165234261s)
	I0819 13:21:14.613681 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.841371 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:14.904812 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:15.017425 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:21:15.017541 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:15.518613 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.018036 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.061238 2151869 api_server.go:72] duration metric: took 1.043828546s to wait for apiserver process to appear ...
	I0819 13:21:16.061271 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:21:16.061301 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:14.846330 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:17.344355 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:19.219563 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 13:21:19.219607 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 13:21:19.219627 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.350613 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.350647 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:19.562010 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:19.567205 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:19.567236 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.062225 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.085272 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 13:21:20.085311 2151869 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 13:21:20.561434 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:21:20.565983 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:21:20.574754 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:21:20.574779 2151869 api_server.go:131] duration metric: took 4.513500352s to wait for apiserver health ...
	I0819 13:21:20.574790 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:21:20.574797 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:21:20.576255 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:21:16.136030 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:16.636437 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.135718 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.635876 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.136668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:18.636021 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.135933 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:19.636638 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.135892 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:20.636450 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:17.804980 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.306777 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:20.577369 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:21:20.603169 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:21:20.642872 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:21:20.657328 2151869 system_pods.go:59] 8 kube-system pods found
	I0819 13:21:20.657366 2151869 system_pods.go:61] "coredns-6f6b679f8f-4w764" [486b4748-9a70-47b1-b386-e5369ebc2a96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 13:21:20.657373 2151869 system_pods.go:61] "etcd-no-preload-441627" [8013da47-1ecb-47de-a82c-7b72529b68fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:21:20.657383 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [72190b64-c312-48c0-8494-27e1ad39d877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:21:20.657389 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [60773a38-952f-4f53-a3f3-e54acdf7d196] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:21:20.657395 2151869 system_pods.go:61] "kube-proxy-v76kp" [ac6ad27b-5ee4-4899-8744-82537285a195] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 13:21:20.657400 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [78dde452-d8d5-498d-a6b1-459e6cb9ac16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:21:20.657405 2151869 system_pods.go:61] "metrics-server-6867b74b74-hd777" [2ce49c7d-af76-459a-82f5-aaa8d2b97c60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:21:20.657410 2151869 system_pods.go:61] "storage-provisioner" [b67a7c83-46d3-4d75-bfa2-ae944b0c0ab8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 13:21:20.657417 2151869 system_pods.go:74] duration metric: took 14.518946ms to wait for pod list to return data ...
	I0819 13:21:20.657424 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:21:20.661337 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:21:20.661362 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:21:20.661373 2151869 node_conditions.go:105] duration metric: took 3.944656ms to run NodePressure ...
	I0819 13:21:20.661435 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 13:21:20.951411 2151869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955734 2151869 kubeadm.go:739] kubelet initialised
	I0819 13:21:20.955755 2151869 kubeadm.go:740] duration metric: took 4.317107ms waiting for restarted kubelet to initialise ...
	I0819 13:21:20.955763 2151869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:21:20.961038 2151869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.965450 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965473 2151869 pod_ready.go:82] duration metric: took 4.406672ms for pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.965481 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "coredns-6f6b679f8f-4w764" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.965488 2151869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.969586 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969610 2151869 pod_ready.go:82] duration metric: took 4.112423ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.969620 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "etcd-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.969626 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:20.973542 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973566 2151869 pod_ready.go:82] duration metric: took 3.932252ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:20.973575 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-apiserver-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:20.973583 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.046350 2151869 pod_ready.go:98] node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046390 2151869 pod_ready.go:82] duration metric: took 72.796575ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	E0819 13:21:21.046405 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-441627" hosting pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-441627" has status "Ready":"False"
	I0819 13:21:21.046414 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:19.346285 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.346824 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.136149 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:21.635766 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.136148 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:22.636029 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.135942 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:23.636570 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.135982 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:24.635967 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.136750 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:25.635977 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:25.636070 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:25.675163 2152780 cri.go:89] found id: ""
	I0819 13:21:25.675196 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.675208 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:25.675216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:25.675294 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:25.711676 2152780 cri.go:89] found id: ""
	I0819 13:21:25.711702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.711711 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:25.711717 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:25.711777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:25.748448 2152780 cri.go:89] found id: ""
	I0819 13:21:25.748479 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.748490 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:25.748499 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:25.748574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:25.786555 2152780 cri.go:89] found id: ""
	I0819 13:21:25.786584 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.786597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:25.786606 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:25.786661 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:22.804004 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:24.804708 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:21.447200 2151869 pod_ready.go:93] pod "kube-proxy-v76kp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:21.447230 2151869 pod_ready.go:82] duration metric: took 400.803851ms for pod "kube-proxy-v76kp" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:21.447241 2151869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:23.452136 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.454487 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:23.845593 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:26.345971 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:25.819620 2152780 cri.go:89] found id: ""
	I0819 13:21:25.819654 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.819665 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:25.819672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:25.819734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:25.853303 2152780 cri.go:89] found id: ""
	I0819 13:21:25.853333 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.853342 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:25.853349 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:25.853401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:25.886285 2152780 cri.go:89] found id: ""
	I0819 13:21:25.886315 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.886325 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:25.886332 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:25.886389 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:25.921320 2152780 cri.go:89] found id: ""
	I0819 13:21:25.921347 2152780 logs.go:276] 0 containers: []
	W0819 13:21:25.921356 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:25.921367 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:25.921377 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:25.975400 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:25.975486 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:25.989715 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:25.989741 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:26.115378 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:26.115402 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:26.115415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:26.192370 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:26.192415 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:28.739640 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:28.752652 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:28.752729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:28.791883 2152780 cri.go:89] found id: ""
	I0819 13:21:28.791910 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.791918 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:28.791925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:28.791977 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:28.829635 2152780 cri.go:89] found id: ""
	I0819 13:21:28.829668 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.829678 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:28.829686 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:28.829756 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:28.866768 2152780 cri.go:89] found id: ""
	I0819 13:21:28.866804 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.866817 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:28.866825 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:28.866897 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:28.901921 2152780 cri.go:89] found id: ""
	I0819 13:21:28.901954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.901963 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:28.901970 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:28.902025 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:28.945421 2152780 cri.go:89] found id: ""
	I0819 13:21:28.945449 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.945458 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:28.945468 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:28.945535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:28.982857 2152780 cri.go:89] found id: ""
	I0819 13:21:28.982893 2152780 logs.go:276] 0 containers: []
	W0819 13:21:28.982905 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:28.982918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:28.982978 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:29.020378 2152780 cri.go:89] found id: ""
	I0819 13:21:29.020415 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.020427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:29.020436 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:29.020502 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:29.055782 2152780 cri.go:89] found id: ""
	I0819 13:21:29.055817 2152780 logs.go:276] 0 containers: []
	W0819 13:21:29.055829 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:29.055841 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:29.055864 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:29.107722 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:29.107761 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:29.122304 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:29.122364 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:29.193379 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:29.193415 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:29.193435 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:29.271539 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:29.271578 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:27.304409 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.306462 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:27.952426 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:29.954179 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:28.346305 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:30.845538 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:31.819829 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:31.832354 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:31.832427 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:31.873620 2152780 cri.go:89] found id: ""
	I0819 13:21:31.873647 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.873656 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:31.873662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:31.873722 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:31.905857 2152780 cri.go:89] found id: ""
	I0819 13:21:31.905891 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.905903 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:31.905911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:31.905993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:31.942187 2152780 cri.go:89] found id: ""
	I0819 13:21:31.942215 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.942227 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:31.942235 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:31.942307 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:31.981439 2152780 cri.go:89] found id: ""
	I0819 13:21:31.981475 2152780 logs.go:276] 0 containers: []
	W0819 13:21:31.981486 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:31.981494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:31.981569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:32.019863 2152780 cri.go:89] found id: ""
	I0819 13:21:32.019900 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.019922 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:32.019934 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:32.020000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:32.055007 2152780 cri.go:89] found id: ""
	I0819 13:21:32.055035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.055044 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:32.055050 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:32.055117 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:32.092770 2152780 cri.go:89] found id: ""
	I0819 13:21:32.092809 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.092818 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:32.092824 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:32.092880 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:32.127000 2152780 cri.go:89] found id: ""
	I0819 13:21:32.127035 2152780 logs.go:276] 0 containers: []
	W0819 13:21:32.127043 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:32.127053 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:32.127067 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:32.179979 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:32.180026 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:32.195377 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:32.195412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:32.269864 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:32.269887 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:32.269902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:32.348035 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:32.348072 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:34.886647 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:34.900410 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:34.900483 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:34.938558 2152780 cri.go:89] found id: ""
	I0819 13:21:34.938588 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.938596 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:34.938603 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:34.938660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:34.975968 2152780 cri.go:89] found id: ""
	I0819 13:21:34.976001 2152780 logs.go:276] 0 containers: []
	W0819 13:21:34.976012 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:34.976020 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:34.976105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:35.008967 2152780 cri.go:89] found id: ""
	I0819 13:21:35.008995 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.009003 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:35.009010 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:35.009079 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:35.045918 2152780 cri.go:89] found id: ""
	I0819 13:21:35.045957 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.045966 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:35.045973 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:35.046038 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:35.090605 2152780 cri.go:89] found id: ""
	I0819 13:21:35.090640 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.090652 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:35.090667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:35.090740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:35.152032 2152780 cri.go:89] found id: ""
	I0819 13:21:35.152066 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.152079 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:35.152088 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:35.152182 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:35.197900 2152780 cri.go:89] found id: ""
	I0819 13:21:35.197932 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.197947 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:35.197956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:35.198026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:35.233818 2152780 cri.go:89] found id: ""
	I0819 13:21:35.233853 2152780 logs.go:276] 0 containers: []
	W0819 13:21:35.233867 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:35.233884 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:35.233902 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:35.316457 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:35.316501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:35.356449 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:35.356480 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:35.409307 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:35.409347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:35.423776 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:35.423807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:35.500868 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:31.803587 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:33.803736 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.805282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:32.452623 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:34.454347 2151869 pod_ready.go:103] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.454772 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:21:35.454809 2151869 pod_ready.go:82] duration metric: took 14.007559429s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:35.454826 2151869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	I0819 13:21:33.345607 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:35.347672 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.844185 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:38.001981 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:38.014908 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:38.014991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:38.051110 2152780 cri.go:89] found id: ""
	I0819 13:21:38.051141 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.051154 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:38.051162 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:38.051236 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:38.087476 2152780 cri.go:89] found id: ""
	I0819 13:21:38.087502 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.087511 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:38.087517 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:38.087575 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:38.122929 2152780 cri.go:89] found id: ""
	I0819 13:21:38.122961 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.122972 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:38.122980 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:38.123050 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:38.157352 2152780 cri.go:89] found id: ""
	I0819 13:21:38.157382 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.157394 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:38.157403 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:38.157468 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:38.194914 2152780 cri.go:89] found id: ""
	I0819 13:21:38.194953 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.194968 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:38.194974 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:38.195028 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:38.234036 2152780 cri.go:89] found id: ""
	I0819 13:21:38.234069 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.234078 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:38.234087 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:38.234155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:38.275759 2152780 cri.go:89] found id: ""
	I0819 13:21:38.275787 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.275798 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:38.275806 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:38.275867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:38.312718 2152780 cri.go:89] found id: ""
	I0819 13:21:38.312749 2152780 logs.go:276] 0 containers: []
	W0819 13:21:38.312759 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:38.312770 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:38.312785 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:38.352032 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:38.352057 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:38.401877 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:38.401912 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:38.415899 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:38.415929 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:38.495576 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:38.495596 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:38.495609 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:38.304885 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.804828 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:37.461711 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:39.960883 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:40.343811 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.344371 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:41.073796 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:41.095862 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:41.095935 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:41.155760 2152780 cri.go:89] found id: ""
	I0819 13:21:41.155790 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.155820 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:41.155828 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:41.155899 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:41.200108 2152780 cri.go:89] found id: ""
	I0819 13:21:41.200147 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.200177 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:41.200189 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:41.200263 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:41.233462 2152780 cri.go:89] found id: ""
	I0819 13:21:41.233494 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.233506 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:41.233514 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:41.233585 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:41.267089 2152780 cri.go:89] found id: ""
	I0819 13:21:41.267120 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.267133 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:41.267141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:41.267212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:41.302391 2152780 cri.go:89] found id: ""
	I0819 13:21:41.302423 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.302434 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:41.302443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:41.302509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:41.337812 2152780 cri.go:89] found id: ""
	I0819 13:21:41.337843 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.337855 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:41.337864 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:41.337939 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:41.373924 2152780 cri.go:89] found id: ""
	I0819 13:21:41.373954 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.373961 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:41.373968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:41.374023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:41.407905 2152780 cri.go:89] found id: ""
	I0819 13:21:41.407950 2152780 logs.go:276] 0 containers: []
	W0819 13:21:41.407961 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:41.407974 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:41.407996 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:41.493650 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:41.493693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:41.532411 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:41.532456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:41.584613 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:41.584650 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:41.598337 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:41.598369 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:41.672154 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.172662 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:44.185663 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:44.185730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:44.220825 2152780 cri.go:89] found id: ""
	I0819 13:21:44.220863 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.220876 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:44.220885 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:44.220955 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:44.257261 2152780 cri.go:89] found id: ""
	I0819 13:21:44.257292 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.257300 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:44.257307 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:44.257369 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:44.292652 2152780 cri.go:89] found id: ""
	I0819 13:21:44.292690 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.292701 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:44.292709 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:44.292806 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:44.330064 2152780 cri.go:89] found id: ""
	I0819 13:21:44.330096 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.330108 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:44.330117 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:44.330184 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:44.366863 2152780 cri.go:89] found id: ""
	I0819 13:21:44.366895 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.366906 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:44.366915 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:44.366976 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:44.400335 2152780 cri.go:89] found id: ""
	I0819 13:21:44.400372 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.400385 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:44.400394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:44.400458 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:44.434381 2152780 cri.go:89] found id: ""
	I0819 13:21:44.434411 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.434422 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:44.434430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:44.434498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:44.470985 2152780 cri.go:89] found id: ""
	I0819 13:21:44.471013 2152780 logs.go:276] 0 containers: []
	W0819 13:21:44.471024 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:44.471036 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:44.471048 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:44.521552 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:44.521593 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:44.536197 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:44.536230 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:44.613040 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:44.613063 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:44.613077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:44.694600 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:44.694655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:42.807847 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:45.304916 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:42.461974 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.464590 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:44.843926 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.845069 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:47.231770 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:47.245480 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:47.245555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:47.282302 2152780 cri.go:89] found id: ""
	I0819 13:21:47.282334 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.282343 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:47.282351 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:47.282424 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:47.317791 2152780 cri.go:89] found id: ""
	I0819 13:21:47.317828 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.317839 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:47.317847 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:47.317914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:47.355999 2152780 cri.go:89] found id: ""
	I0819 13:21:47.356033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.356044 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:47.356052 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:47.356113 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:47.394652 2152780 cri.go:89] found id: ""
	I0819 13:21:47.394689 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.394702 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:47.394711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:47.394769 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:47.428563 2152780 cri.go:89] found id: ""
	I0819 13:21:47.428599 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.428612 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:47.428621 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:47.428698 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:47.462669 2152780 cri.go:89] found id: ""
	I0819 13:21:47.462702 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.462712 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:47.462719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:47.462789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:47.496322 2152780 cri.go:89] found id: ""
	I0819 13:21:47.496349 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.496357 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:47.496366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:47.496419 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:47.527997 2152780 cri.go:89] found id: ""
	I0819 13:21:47.528033 2152780 logs.go:276] 0 containers: []
	W0819 13:21:47.528045 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:47.528058 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:47.528077 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:47.541514 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:47.541542 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:47.612804 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:47.612833 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:47.612851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.692815 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:47.692857 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:47.731057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:47.731092 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.286289 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:50.299376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:50.299457 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:50.335093 2152780 cri.go:89] found id: ""
	I0819 13:21:50.335123 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.335132 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:50.335138 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:50.335212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:50.369353 2152780 cri.go:89] found id: ""
	I0819 13:21:50.369378 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.369386 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:50.369394 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:50.369459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:50.404319 2152780 cri.go:89] found id: ""
	I0819 13:21:50.404350 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.404360 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:50.404366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:50.404421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:50.441033 2152780 cri.go:89] found id: ""
	I0819 13:21:50.441064 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.441073 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:50.441080 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:50.441136 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:50.477438 2152780 cri.go:89] found id: ""
	I0819 13:21:50.477463 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.477471 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:50.477485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:50.477541 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:50.511785 2152780 cri.go:89] found id: ""
	I0819 13:21:50.511825 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.511838 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:50.511848 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:50.511925 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:50.545454 2152780 cri.go:89] found id: ""
	I0819 13:21:50.545487 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.545496 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:50.545502 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:50.545563 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:50.582022 2152780 cri.go:89] found id: ""
	I0819 13:21:50.582053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:50.582061 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:50.582071 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:50.582086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:50.625353 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:50.625384 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:50.688814 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:50.688851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:50.704225 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:50.704254 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:50.782736 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:50.782771 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:50.782789 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:47.804084 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.804223 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:46.961189 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:48.962466 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:49.344808 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.347615 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.365075 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:53.380049 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:53.380135 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:53.418911 2152780 cri.go:89] found id: ""
	I0819 13:21:53.418944 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.418956 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:53.418964 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:53.419020 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:53.458952 2152780 cri.go:89] found id: ""
	I0819 13:21:53.458986 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.458997 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:53.459005 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:53.459080 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:53.497219 2152780 cri.go:89] found id: ""
	I0819 13:21:53.497254 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.497265 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:53.497273 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:53.497360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:53.533624 2152780 cri.go:89] found id: ""
	I0819 13:21:53.533658 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.533671 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:53.533682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:53.533761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:53.567453 2152780 cri.go:89] found id: ""
	I0819 13:21:53.567490 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.567501 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:53.567509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:53.567574 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:53.603438 2152780 cri.go:89] found id: ""
	I0819 13:21:53.603472 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.603482 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:53.603490 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:53.603570 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:53.637608 2152780 cri.go:89] found id: ""
	I0819 13:21:53.637648 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.637659 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:53.637667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:53.637742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:53.684476 2152780 cri.go:89] found id: ""
	I0819 13:21:53.684514 2152780 logs.go:276] 0 containers: []
	W0819 13:21:53.684524 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:53.684534 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:53.684550 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:53.699195 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:53.699227 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:53.766980 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:53.767000 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:53.767013 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:53.844890 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:53.844924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:53.888284 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:53.888318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:51.805011 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.805710 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:51.460867 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.462350 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.962100 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:53.845091 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:55.845602 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:57.848944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:56.442668 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:56.455936 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:56.456019 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:56.494363 2152780 cri.go:89] found id: ""
	I0819 13:21:56.494403 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.494414 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:56.494423 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:56.494498 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:56.531901 2152780 cri.go:89] found id: ""
	I0819 13:21:56.531933 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.531943 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:56.531950 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:56.532046 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:56.569677 2152780 cri.go:89] found id: ""
	I0819 13:21:56.569706 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.569715 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:56.569721 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:56.569787 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:56.608109 2152780 cri.go:89] found id: ""
	I0819 13:21:56.608136 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.608147 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:56.608153 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:56.608240 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:56.646753 2152780 cri.go:89] found id: ""
	I0819 13:21:56.646791 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.646803 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:56.646810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:56.646879 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:56.686029 2152780 cri.go:89] found id: ""
	I0819 13:21:56.686053 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.686061 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:56.686067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:56.686131 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:56.724838 2152780 cri.go:89] found id: ""
	I0819 13:21:56.724874 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.724886 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:56.724895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:56.724963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:56.762368 2152780 cri.go:89] found id: ""
	I0819 13:21:56.762399 2152780 logs.go:276] 0 containers: []
	W0819 13:21:56.762407 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:56.762422 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:56.762434 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:56.815047 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:56.815103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:56.830783 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:56.830809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:56.908963 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:56.908990 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:56.909008 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:21:56.991337 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:21:56.991372 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:59.532005 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:21:59.545733 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:21:59.545823 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:21:59.584082 2152780 cri.go:89] found id: ""
	I0819 13:21:59.584111 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.584121 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:21:59.584130 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:21:59.584216 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:21:59.618041 2152780 cri.go:89] found id: ""
	I0819 13:21:59.618075 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.618086 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:21:59.618094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:21:59.618171 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:21:59.651720 2152780 cri.go:89] found id: ""
	I0819 13:21:59.651753 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.651764 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:21:59.651772 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:21:59.651844 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:21:59.686447 2152780 cri.go:89] found id: ""
	I0819 13:21:59.686471 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.686479 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:21:59.686486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:21:59.686539 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:21:59.719583 2152780 cri.go:89] found id: ""
	I0819 13:21:59.719623 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.719635 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:21:59.719643 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:21:59.719714 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:21:59.753559 2152780 cri.go:89] found id: ""
	I0819 13:21:59.753593 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.753604 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:21:59.753613 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:21:59.753684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:21:59.789518 2152780 cri.go:89] found id: ""
	I0819 13:21:59.789554 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.789566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:21:59.789575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:21:59.789647 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:21:59.825772 2152780 cri.go:89] found id: ""
	I0819 13:21:59.825801 2152780 logs.go:276] 0 containers: []
	W0819 13:21:59.825812 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:21:59.825823 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:21:59.825837 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:21:59.878857 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:21:59.878899 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:21:59.894113 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:21:59.894146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:21:59.975274 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:21:59.975301 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:21:59.975318 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:00.056121 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:00.056185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:21:56.304106 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.804232 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.805308 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:21:58.461819 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.961828 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:00.345176 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.345756 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.596366 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:02.610365 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:02.610439 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:02.646703 2152780 cri.go:89] found id: ""
	I0819 13:22:02.646729 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.646737 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:02.646743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:02.646800 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:02.682311 2152780 cri.go:89] found id: ""
	I0819 13:22:02.682345 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.682357 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:02.682364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:02.682441 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:02.718494 2152780 cri.go:89] found id: ""
	I0819 13:22:02.718525 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.718545 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:02.718551 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:02.718608 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.751952 2152780 cri.go:89] found id: ""
	I0819 13:22:02.751988 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.751999 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:02.752008 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:02.752074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:02.786156 2152780 cri.go:89] found id: ""
	I0819 13:22:02.786189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.786197 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:02.786204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:02.786261 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:02.823080 2152780 cri.go:89] found id: ""
	I0819 13:22:02.823114 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.823126 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:02.823134 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:02.823207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:02.860266 2152780 cri.go:89] found id: ""
	I0819 13:22:02.860299 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.860311 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:02.860320 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:02.860377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:02.903612 2152780 cri.go:89] found id: ""
	I0819 13:22:02.903643 2152780 logs.go:276] 0 containers: []
	W0819 13:22:02.903652 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:02.903667 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:02.903680 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:02.984771 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:02.984813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:03.027346 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:03.027378 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:03.077562 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:03.077608 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:03.092135 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:03.092190 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:03.168418 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:05.669468 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:05.682640 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:05.682706 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:05.717755 2152780 cri.go:89] found id: ""
	I0819 13:22:05.717785 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.717796 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:05.717805 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:05.717867 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:05.757165 2152780 cri.go:89] found id: ""
	I0819 13:22:05.757191 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.757199 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:05.757204 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:05.757265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:05.794991 2152780 cri.go:89] found id: ""
	I0819 13:22:05.795023 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.795035 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:05.795044 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:05.795124 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:02.805919 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.304915 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:02.963297 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.459786 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:04.844594 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.344680 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:05.829634 2152780 cri.go:89] found id: ""
	I0819 13:22:05.829664 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.829673 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:05.829680 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:05.829734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:05.865739 2152780 cri.go:89] found id: ""
	I0819 13:22:05.865767 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.865775 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:05.865781 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:05.865839 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:05.901395 2152780 cri.go:89] found id: ""
	I0819 13:22:05.901425 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.901435 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:05.901443 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:05.901641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:05.937162 2152780 cri.go:89] found id: ""
	I0819 13:22:05.937190 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.937200 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:05.937207 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:05.937284 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:05.974302 2152780 cri.go:89] found id: ""
	I0819 13:22:05.974332 2152780 logs.go:276] 0 containers: []
	W0819 13:22:05.974342 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:05.974353 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:05.974367 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:06.012635 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:06.012664 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:06.064093 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:06.064129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:06.077792 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:06.077825 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:06.148954 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:06.148980 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:06.148999 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:08.731539 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:08.747311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:08.747414 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:08.782881 2152780 cri.go:89] found id: ""
	I0819 13:22:08.782913 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.782921 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:08.782927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:08.782990 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:08.817533 2152780 cri.go:89] found id: ""
	I0819 13:22:08.817569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.817580 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:08.817587 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:08.817657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:08.855614 2152780 cri.go:89] found id: ""
	I0819 13:22:08.855650 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.855664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:08.855672 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:08.855742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:08.896405 2152780 cri.go:89] found id: ""
	I0819 13:22:08.896439 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.896447 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:08.896455 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:08.896525 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:08.936250 2152780 cri.go:89] found id: ""
	I0819 13:22:08.936279 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.936288 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:08.936296 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:08.936366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:08.976325 2152780 cri.go:89] found id: ""
	I0819 13:22:08.976357 2152780 logs.go:276] 0 containers: []
	W0819 13:22:08.976367 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:08.976376 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:08.976446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:09.012921 2152780 cri.go:89] found id: ""
	I0819 13:22:09.012953 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.012962 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:09.012967 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:09.013026 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:09.051385 2152780 cri.go:89] found id: ""
	I0819 13:22:09.051420 2152780 logs.go:276] 0 containers: []
	W0819 13:22:09.051432 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:09.051444 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:09.051461 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:09.105101 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:09.105146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:09.120289 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:09.120331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:09.216182 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:09.216220 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:09.216238 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:09.302832 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:09.302876 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:07.307100 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.803522 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:07.461843 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.961972 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:09.345855 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.845869 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.848734 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:11.861754 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:11.861838 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:11.897908 2152780 cri.go:89] found id: ""
	I0819 13:22:11.897940 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.897949 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:11.897956 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:11.898013 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:11.935276 2152780 cri.go:89] found id: ""
	I0819 13:22:11.935309 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.935318 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:11.935326 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:11.935386 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:11.978577 2152780 cri.go:89] found id: ""
	I0819 13:22:11.978608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:11.978619 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:11.978626 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:11.978697 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:12.018464 2152780 cri.go:89] found id: ""
	I0819 13:22:12.018496 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.018505 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:12.018511 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:12.018567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:12.062371 2152780 cri.go:89] found id: ""
	I0819 13:22:12.062402 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.062412 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:12.062418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:12.062476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:12.102988 2152780 cri.go:89] found id: ""
	I0819 13:22:12.103017 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.103026 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:12.103033 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:12.103088 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:12.143553 2152780 cri.go:89] found id: ""
	I0819 13:22:12.143581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.143591 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:12.143599 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:12.143660 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:12.187155 2152780 cri.go:89] found id: ""
	I0819 13:22:12.187192 2152780 logs.go:276] 0 containers: []
	W0819 13:22:12.187211 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:12.187224 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:12.187251 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:12.236028 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:12.236069 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:12.250047 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:12.250084 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:12.320586 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:12.320612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:12.320624 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:12.400274 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:12.400320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:14.939735 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:14.953283 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:14.953348 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:14.993818 2152780 cri.go:89] found id: ""
	I0819 13:22:14.993851 2152780 logs.go:276] 0 containers: []
	W0819 13:22:14.993862 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:14.993870 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:14.993940 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:15.035966 2152780 cri.go:89] found id: ""
	I0819 13:22:15.035997 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.036005 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:15.036012 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:15.036073 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:15.070730 2152780 cri.go:89] found id: ""
	I0819 13:22:15.070765 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.070777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:15.070785 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:15.070855 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:15.111830 2152780 cri.go:89] found id: ""
	I0819 13:22:15.111859 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.111868 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:15.111874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:15.111928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:15.144013 2152780 cri.go:89] found id: ""
	I0819 13:22:15.144039 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.144047 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:15.144053 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:15.144107 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:15.178050 2152780 cri.go:89] found id: ""
	I0819 13:22:15.178074 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.178083 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:15.178090 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:15.178145 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:15.212622 2152780 cri.go:89] found id: ""
	I0819 13:22:15.212647 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.212656 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:15.212662 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:15.212716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:15.244744 2152780 cri.go:89] found id: ""
	I0819 13:22:15.244772 2152780 logs.go:276] 0 containers: []
	W0819 13:22:15.244780 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:15.244789 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:15.244804 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:15.284232 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:15.284266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:15.336738 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:15.336774 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:15.350801 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:15.350835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:15.426424 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:15.426453 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:15.426469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:11.809160 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.303978 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:11.962746 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:13.964030 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:14.344847 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.344951 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.009087 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:18.022428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:18.022500 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:18.062900 2152780 cri.go:89] found id: ""
	I0819 13:22:18.062930 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.062941 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:18.062949 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:18.063021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:18.096711 2152780 cri.go:89] found id: ""
	I0819 13:22:18.096752 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.096764 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:18.096773 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:18.096846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:18.129199 2152780 cri.go:89] found id: ""
	I0819 13:22:18.129226 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.129246 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:18.129253 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:18.129323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:18.161907 2152780 cri.go:89] found id: ""
	I0819 13:22:18.161944 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.161955 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:18.161963 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:18.162036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:18.198514 2152780 cri.go:89] found id: ""
	I0819 13:22:18.198557 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.198570 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:18.198577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:18.198648 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:18.230114 2152780 cri.go:89] found id: ""
	I0819 13:22:18.230142 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.230152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:18.230160 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:18.230221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:18.265610 2152780 cri.go:89] found id: ""
	I0819 13:22:18.265642 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.265654 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:18.265660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:18.265729 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:18.298889 2152780 cri.go:89] found id: ""
	I0819 13:22:18.298929 2152780 logs.go:276] 0 containers: []
	W0819 13:22:18.298941 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:18.298952 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:18.298967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:18.381124 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:18.381165 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:18.422361 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:18.422394 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:18.482182 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:18.482215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:18.496054 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:18.496086 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:18.568459 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:16.305471 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.804592 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:16.461760 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.961000 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:20.961568 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:18.844928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.345851 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:21.069389 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:21.082568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:21.082636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:21.117124 2152780 cri.go:89] found id: ""
	I0819 13:22:21.117158 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.117171 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:21.117180 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:21.117272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:21.151954 2152780 cri.go:89] found id: ""
	I0819 13:22:21.151981 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.151990 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:21.151996 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:21.152047 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:21.187411 2152780 cri.go:89] found id: ""
	I0819 13:22:21.187443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.187452 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:21.187461 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:21.187528 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:21.224118 2152780 cri.go:89] found id: ""
	I0819 13:22:21.224148 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.224156 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:21.224173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:21.224247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:21.261153 2152780 cri.go:89] found id: ""
	I0819 13:22:21.261189 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.261198 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:21.261205 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:21.261274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:21.305404 2152780 cri.go:89] found id: ""
	I0819 13:22:21.305433 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.305444 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:21.305453 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:21.305534 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:21.358428 2152780 cri.go:89] found id: ""
	I0819 13:22:21.358462 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.358474 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:21.358482 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:21.358561 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:21.410971 2152780 cri.go:89] found id: ""
	I0819 13:22:21.411005 2152780 logs.go:276] 0 containers: []
	W0819 13:22:21.411017 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:21.411029 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:21.411045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:21.462929 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:21.462963 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:21.476419 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:21.476446 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:21.544095 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.544116 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:21.544130 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:21.622802 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:21.622841 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.171988 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:24.185545 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:24.185624 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:24.221604 2152780 cri.go:89] found id: ""
	I0819 13:22:24.221632 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.221640 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:24.221646 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:24.221703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:24.255866 2152780 cri.go:89] found id: ""
	I0819 13:22:24.255903 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.255914 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:24.255923 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:24.255993 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:24.290338 2152780 cri.go:89] found id: ""
	I0819 13:22:24.290381 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.290393 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:24.290401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:24.290465 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:24.325653 2152780 cri.go:89] found id: ""
	I0819 13:22:24.325685 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.325693 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:24.325700 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:24.325753 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:24.364081 2152780 cri.go:89] found id: ""
	I0819 13:22:24.364116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.364128 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:24.364136 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:24.364211 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:24.403464 2152780 cri.go:89] found id: ""
	I0819 13:22:24.403504 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.403516 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:24.403524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:24.403597 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:24.441205 2152780 cri.go:89] found id: ""
	I0819 13:22:24.441244 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.441256 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:24.441264 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:24.441340 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:24.478294 2152780 cri.go:89] found id: ""
	I0819 13:22:24.478324 2152780 logs.go:276] 0 containers: []
	W0819 13:22:24.478332 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:24.478341 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:24.478354 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:24.562969 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:24.563012 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:24.609296 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:24.609332 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:24.665537 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:24.665579 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:24.680086 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:24.680128 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:24.743465 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:21.305466 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.804623 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.461141 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.461784 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:23.844225 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:25.844485 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.844552 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.244137 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:27.257619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:27.257695 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:27.290229 2152780 cri.go:89] found id: ""
	I0819 13:22:27.290265 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.290274 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:27.290280 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:27.290354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:27.327185 2152780 cri.go:89] found id: ""
	I0819 13:22:27.327216 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.327225 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:27.327231 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:27.327302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:27.362949 2152780 cri.go:89] found id: ""
	I0819 13:22:27.362978 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.362999 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:27.363007 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:27.363074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:27.404085 2152780 cri.go:89] found id: ""
	I0819 13:22:27.404116 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.404125 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:27.404132 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:27.404237 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:27.439884 2152780 cri.go:89] found id: ""
	I0819 13:22:27.439922 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.439933 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:27.439942 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:27.440006 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:27.473933 2152780 cri.go:89] found id: ""
	I0819 13:22:27.473961 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.473970 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:27.473976 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:27.474033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:27.510093 2152780 cri.go:89] found id: ""
	I0819 13:22:27.510121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.510131 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:27.510137 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:27.510188 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:27.549178 2152780 cri.go:89] found id: ""
	I0819 13:22:27.549205 2152780 logs.go:276] 0 containers: []
	W0819 13:22:27.549213 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:27.549223 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:27.549236 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:27.605886 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:27.605925 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:27.619506 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:27.619537 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:27.686821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:27.686849 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:27.686867 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:27.761275 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:27.761315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:30.301382 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:30.315140 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:30.315221 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:30.352687 2152780 cri.go:89] found id: ""
	I0819 13:22:30.352710 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.352717 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:30.352723 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:30.352775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:30.388577 2152780 cri.go:89] found id: ""
	I0819 13:22:30.388604 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.388612 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:30.388618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:30.388671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:30.423937 2152780 cri.go:89] found id: ""
	I0819 13:22:30.423967 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.423977 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:30.423983 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:30.424039 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:30.456789 2152780 cri.go:89] found id: ""
	I0819 13:22:30.456823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.456834 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:30.456852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:30.456928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:30.495741 2152780 cri.go:89] found id: ""
	I0819 13:22:30.495778 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.495792 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:30.495801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:30.495856 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:30.532919 2152780 cri.go:89] found id: ""
	I0819 13:22:30.532947 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.532959 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:30.532972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:30.533036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:30.574556 2152780 cri.go:89] found id: ""
	I0819 13:22:30.574591 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.574602 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:30.574615 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:30.574685 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:30.610248 2152780 cri.go:89] found id: ""
	I0819 13:22:30.610280 2152780 logs.go:276] 0 containers: []
	W0819 13:22:30.610291 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:30.610303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:30.610319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:30.664302 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:30.664356 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:30.678529 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:30.678571 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:30.753412 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:30.753441 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:30.753460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:26.304403 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:28.305270 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.804282 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:27.462817 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:29.463222 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.344268 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:32.344384 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:30.831754 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:30.831795 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:33.375305 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:33.390064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:33.390139 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:33.426250 2152780 cri.go:89] found id: ""
	I0819 13:22:33.426281 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.426292 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:33.426301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:33.426356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:33.464229 2152780 cri.go:89] found id: ""
	I0819 13:22:33.464255 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.464263 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:33.464270 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:33.464329 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:33.501098 2152780 cri.go:89] found id: ""
	I0819 13:22:33.501134 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.501145 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:33.501154 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:33.501223 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:33.535617 2152780 cri.go:89] found id: ""
	I0819 13:22:33.535651 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.535662 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:33.535670 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:33.535738 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:33.570553 2152780 cri.go:89] found id: ""
	I0819 13:22:33.570600 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.570613 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:33.570622 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:33.570694 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:33.605414 2152780 cri.go:89] found id: ""
	I0819 13:22:33.605454 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.605466 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:33.605477 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:33.605545 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:33.637564 2152780 cri.go:89] found id: ""
	I0819 13:22:33.637599 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.637610 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:33.637618 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:33.637681 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:33.671403 2152780 cri.go:89] found id: ""
	I0819 13:22:33.671438 2152780 logs.go:276] 0 containers: []
	W0819 13:22:33.671450 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:33.671463 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:33.671482 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:33.724121 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:33.724159 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:33.737041 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:33.737070 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:33.808305 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:33.808329 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:33.808342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:33.886089 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:33.886136 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:32.806000 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:35.303503 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:31.463294 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:33.962798 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:34.344950 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.845547 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.424035 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:36.439329 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:36.439407 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:36.486348 2152780 cri.go:89] found id: ""
	I0819 13:22:36.486375 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.486383 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:36.486391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:36.486455 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:36.522796 2152780 cri.go:89] found id: ""
	I0819 13:22:36.522823 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.522834 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:36.522843 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:36.522909 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:36.556528 2152780 cri.go:89] found id: ""
	I0819 13:22:36.556561 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.556573 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:36.556580 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:36.556652 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:36.592545 2152780 cri.go:89] found id: ""
	I0819 13:22:36.592581 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.592591 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:36.592600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:36.592671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:36.628668 2152780 cri.go:89] found id: ""
	I0819 13:22:36.628696 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.628705 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:36.628711 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:36.628765 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:36.660783 2152780 cri.go:89] found id: ""
	I0819 13:22:36.660817 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.660825 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:36.660832 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:36.660886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:36.694853 2152780 cri.go:89] found id: ""
	I0819 13:22:36.694883 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.694893 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:36.694901 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:36.694969 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:36.727767 2152780 cri.go:89] found id: ""
	I0819 13:22:36.727797 2152780 logs.go:276] 0 containers: []
	W0819 13:22:36.727805 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:36.727816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:36.727830 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:36.741032 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:36.741064 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:36.816015 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:36.816045 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:36.816062 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:36.896272 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:36.896319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:36.933895 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:36.933936 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:39.495808 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:39.511071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:39.511158 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:39.551412 2152780 cri.go:89] found id: ""
	I0819 13:22:39.551447 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.551458 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:39.551467 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:39.551523 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:39.588796 2152780 cri.go:89] found id: ""
	I0819 13:22:39.588827 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.588836 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:39.588842 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:39.588904 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:39.627115 2152780 cri.go:89] found id: ""
	I0819 13:22:39.627144 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.627153 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:39.627159 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:39.627222 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:39.662372 2152780 cri.go:89] found id: ""
	I0819 13:22:39.662405 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.662413 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:39.662420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:39.662478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:39.696239 2152780 cri.go:89] found id: ""
	I0819 13:22:39.696272 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.696282 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:39.696298 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:39.696363 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:39.731334 2152780 cri.go:89] found id: ""
	I0819 13:22:39.731372 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.731383 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:39.731391 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:39.731454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:39.764368 2152780 cri.go:89] found id: ""
	I0819 13:22:39.764400 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.764412 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:39.764420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:39.764488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:39.798994 2152780 cri.go:89] found id: ""
	I0819 13:22:39.799026 2152780 logs.go:276] 0 containers: []
	W0819 13:22:39.799038 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:39.799050 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:39.799066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:39.812935 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:39.812965 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:39.882867 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:39.882898 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:39.882913 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:39.968067 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:39.968102 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:40.007122 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:40.007158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:37.304584 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.305417 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:36.462377 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:38.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:39.344771 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.355221 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:42.560811 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:42.574968 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:42.575034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:42.609476 2152780 cri.go:89] found id: ""
	I0819 13:22:42.609516 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.609529 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:42.609537 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:42.609607 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:42.650763 2152780 cri.go:89] found id: ""
	I0819 13:22:42.650794 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.650804 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:42.650812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:42.650885 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:42.686330 2152780 cri.go:89] found id: ""
	I0819 13:22:42.686358 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.686369 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:42.686377 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:42.686446 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:42.720937 2152780 cri.go:89] found id: ""
	I0819 13:22:42.720963 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.720971 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:42.720982 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:42.721036 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:42.755079 2152780 cri.go:89] found id: ""
	I0819 13:22:42.755122 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.755132 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:42.755141 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:42.755212 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:42.787680 2152780 cri.go:89] found id: ""
	I0819 13:22:42.787715 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.787727 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:42.787738 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:42.787807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:42.824768 2152780 cri.go:89] found id: ""
	I0819 13:22:42.824799 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.824807 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:42.824813 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:42.824874 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:42.859902 2152780 cri.go:89] found id: ""
	I0819 13:22:42.859935 2152780 logs.go:276] 0 containers: []
	W0819 13:22:42.859946 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:42.859957 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:42.859972 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:42.911169 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:42.911215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:42.925108 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:42.925140 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:43.002506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:43.002557 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:43.002574 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:43.084408 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:43.084451 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:45.625360 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:45.638948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:45.639033 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:45.675838 2152780 cri.go:89] found id: ""
	I0819 13:22:45.675874 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.675887 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:45.675895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:45.675971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:45.708726 2152780 cri.go:89] found id: ""
	I0819 13:22:45.708757 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.708768 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:45.708776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:45.708842 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:45.749704 2152780 cri.go:89] found id: ""
	I0819 13:22:45.749739 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.749752 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:45.749769 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:45.749835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:45.786426 2152780 cri.go:89] found id: ""
	I0819 13:22:45.786457 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.786469 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:45.786478 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:45.786547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:41.803829 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.804465 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.805120 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:41.462336 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.961582 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:43.845489 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.845943 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:45.822363 2152780 cri.go:89] found id: ""
	I0819 13:22:45.822395 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.822403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:45.822408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:45.822472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:45.856308 2152780 cri.go:89] found id: ""
	I0819 13:22:45.856337 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.856346 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:45.856353 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:45.856421 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:45.893379 2152780 cri.go:89] found id: ""
	I0819 13:22:45.893417 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.893427 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:45.893435 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:45.893503 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:45.929841 2152780 cri.go:89] found id: ""
	I0819 13:22:45.929868 2152780 logs.go:276] 0 containers: []
	W0819 13:22:45.929879 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:45.929890 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:45.929943 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:45.980392 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:45.980432 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:45.994781 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:45.994809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:46.065633 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:46.065658 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:46.065675 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:46.140938 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:46.140976 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.680055 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:48.692972 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:48.693053 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:48.736198 2152780 cri.go:89] found id: ""
	I0819 13:22:48.736232 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.736244 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:48.736252 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:48.736332 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:48.775819 2152780 cri.go:89] found id: ""
	I0819 13:22:48.775855 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.775872 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:48.775881 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:48.775956 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:48.814090 2152780 cri.go:89] found id: ""
	I0819 13:22:48.814121 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.814130 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:48.814139 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:48.814207 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:48.849630 2152780 cri.go:89] found id: ""
	I0819 13:22:48.849656 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.849665 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:48.849671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:48.849730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:48.887945 2152780 cri.go:89] found id: ""
	I0819 13:22:48.887982 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.887995 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:48.888003 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:48.888072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:48.924105 2152780 cri.go:89] found id: ""
	I0819 13:22:48.924145 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.924158 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:48.924184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:48.924266 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:48.960481 2152780 cri.go:89] found id: ""
	I0819 13:22:48.960521 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.960532 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:48.960540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:48.960611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:48.998826 2152780 cri.go:89] found id: ""
	I0819 13:22:48.998856 2152780 logs.go:276] 0 containers: []
	W0819 13:22:48.998865 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:48.998874 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:48.998888 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:49.053064 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:49.053103 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:49.067115 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:49.067154 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:49.146731 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:49.146753 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:49.146767 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:49.232228 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:49.232268 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:48.303393 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.305013 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:46.461905 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.961570 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.961743 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:48.344068 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:50.844551 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:51.775692 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:51.789766 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:51.789848 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:51.830997 2152780 cri.go:89] found id: ""
	I0819 13:22:51.831029 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.831038 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:51.831045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:51.831105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:51.870122 2152780 cri.go:89] found id: ""
	I0819 13:22:51.870150 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.870160 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:51.870168 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:51.870247 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:51.911170 2152780 cri.go:89] found id: ""
	I0819 13:22:51.911198 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.911206 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:51.911216 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:51.911280 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:51.946159 2152780 cri.go:89] found id: ""
	I0819 13:22:51.946197 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.946218 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:51.946225 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:51.946300 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:51.981131 2152780 cri.go:89] found id: ""
	I0819 13:22:51.981160 2152780 logs.go:276] 0 containers: []
	W0819 13:22:51.981170 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:51.981176 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:51.981251 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:52.024417 2152780 cri.go:89] found id: ""
	I0819 13:22:52.024456 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.024477 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:52.024486 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:52.024567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:52.062519 2152780 cri.go:89] found id: ""
	I0819 13:22:52.062552 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.062563 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:52.062571 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:52.062640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:52.106259 2152780 cri.go:89] found id: ""
	I0819 13:22:52.106291 2152780 logs.go:276] 0 containers: []
	W0819 13:22:52.106302 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:52.106314 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:52.106338 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:52.158681 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:52.158723 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:52.172489 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:52.172523 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:52.236862 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:52.236892 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:52.236910 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:52.319287 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:52.319331 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:54.864061 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:54.878753 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:54.878834 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:54.913195 2152780 cri.go:89] found id: ""
	I0819 13:22:54.913225 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.913240 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:54.913247 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:54.913314 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:54.954867 2152780 cri.go:89] found id: ""
	I0819 13:22:54.954909 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.954922 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:54.954930 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:54.954995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:54.992570 2152780 cri.go:89] found id: ""
	I0819 13:22:54.992601 2152780 logs.go:276] 0 containers: []
	W0819 13:22:54.992610 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:54.992616 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:54.992683 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:55.027056 2152780 cri.go:89] found id: ""
	I0819 13:22:55.027090 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.027101 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:55.027108 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:55.027190 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:55.061941 2152780 cri.go:89] found id: ""
	I0819 13:22:55.061973 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.061985 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:55.061994 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:55.062061 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:55.100414 2152780 cri.go:89] found id: ""
	I0819 13:22:55.100443 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.100455 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:55.100465 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:55.100535 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:55.143797 2152780 cri.go:89] found id: ""
	I0819 13:22:55.143836 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.143848 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:55.143856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:55.143929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:55.187540 2152780 cri.go:89] found id: ""
	I0819 13:22:55.187569 2152780 logs.go:276] 0 containers: []
	W0819 13:22:55.187577 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:55.187587 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:55.187600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:55.241075 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:55.241110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:55.254813 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:55.254846 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:55.325934 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:55.325955 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:55.325974 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:55.404725 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:55.404775 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:52.804423 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.303884 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:52.963783 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.461517 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:53.345242 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:55.346054 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.843535 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.949501 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:22:57.964664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:22:57.964730 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:22:58.005114 2152780 cri.go:89] found id: ""
	I0819 13:22:58.005153 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.005164 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:22:58.005173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:22:58.005274 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:22:58.038596 2152780 cri.go:89] found id: ""
	I0819 13:22:58.038629 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.038639 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:22:58.038644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:22:58.038700 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:22:58.081597 2152780 cri.go:89] found id: ""
	I0819 13:22:58.081631 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.081640 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:22:58.081650 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:22:58.081735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:22:58.120356 2152780 cri.go:89] found id: ""
	I0819 13:22:58.120389 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.120400 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:22:58.120408 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:22:58.120478 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:22:58.156313 2152780 cri.go:89] found id: ""
	I0819 13:22:58.156346 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.156358 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:22:58.156366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:22:58.156442 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:22:58.192198 2152780 cri.go:89] found id: ""
	I0819 13:22:58.192238 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.192249 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:22:58.192257 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:22:58.192335 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:22:58.226574 2152780 cri.go:89] found id: ""
	I0819 13:22:58.226608 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.226622 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:22:58.226631 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:22:58.226705 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:22:58.260060 2152780 cri.go:89] found id: ""
	I0819 13:22:58.260088 2152780 logs.go:276] 0 containers: []
	W0819 13:22:58.260096 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:22:58.260106 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:22:58.260119 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:22:58.338242 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:22:58.338289 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:22:58.382245 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:22:58.382286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:22:58.436273 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:22:58.436314 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:22:58.452143 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:22:58.452215 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:22:58.530133 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:22:57.305258 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.804142 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:57.462332 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.962480 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:22:59.844922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:01.030732 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:01.044083 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:01.044155 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:01.083082 2152780 cri.go:89] found id: ""
	I0819 13:23:01.083114 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.083122 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:01.083129 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:01.083197 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:01.122645 2152780 cri.go:89] found id: ""
	I0819 13:23:01.122680 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.122700 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:01.122708 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:01.122777 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:01.163144 2152780 cri.go:89] found id: ""
	I0819 13:23:01.163175 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.163187 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:01.163194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:01.163262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:01.201545 2152780 cri.go:89] found id: ""
	I0819 13:23:01.201587 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.201597 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:01.201605 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:01.201693 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:01.237047 2152780 cri.go:89] found id: ""
	I0819 13:23:01.237075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.237086 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:01.237094 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:01.237162 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:01.276531 2152780 cri.go:89] found id: ""
	I0819 13:23:01.276560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.276571 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:01.276577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:01.276633 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:01.315639 2152780 cri.go:89] found id: ""
	I0819 13:23:01.315668 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.315676 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:01.315682 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:01.315734 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:01.351942 2152780 cri.go:89] found id: ""
	I0819 13:23:01.351969 2152780 logs.go:276] 0 containers: []
	W0819 13:23:01.351980 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:01.351990 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:01.352006 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:01.403068 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:01.403111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:01.416608 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:01.416639 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:01.493450 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:01.493483 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:01.493501 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:01.577709 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:01.577750 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:04.146865 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:04.160366 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:04.160431 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:04.194463 2152780 cri.go:89] found id: ""
	I0819 13:23:04.194494 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.194504 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:04.194510 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:04.194567 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:04.228327 2152780 cri.go:89] found id: ""
	I0819 13:23:04.228351 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.228359 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:04.228364 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:04.228413 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:04.261613 2152780 cri.go:89] found id: ""
	I0819 13:23:04.261645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.261656 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:04.261664 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:04.261740 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:04.294583 2152780 cri.go:89] found id: ""
	I0819 13:23:04.294617 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.294628 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:04.294636 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:04.294712 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:04.328537 2152780 cri.go:89] found id: ""
	I0819 13:23:04.328566 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.328576 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:04.328585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:04.328657 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:04.361865 2152780 cri.go:89] found id: ""
	I0819 13:23:04.361906 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.361917 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:04.361925 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:04.361988 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:04.396308 2152780 cri.go:89] found id: ""
	I0819 13:23:04.396343 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.396354 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:04.396361 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:04.396433 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:04.429551 2152780 cri.go:89] found id: ""
	I0819 13:23:04.429580 2152780 logs.go:276] 0 containers: []
	W0819 13:23:04.429591 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:04.429602 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:04.429618 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:04.479557 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:04.479600 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:04.493746 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:04.493781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:04.563309 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:04.563333 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:04.563347 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:04.644413 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:04.644455 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:02.305047 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.804564 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:02.460986 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.461977 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:04.845028 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.345778 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:07.195678 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:07.209255 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:07.209323 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:07.245267 2152780 cri.go:89] found id: ""
	I0819 13:23:07.245295 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.245306 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:07.245324 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:07.245394 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:07.283190 2152780 cri.go:89] found id: ""
	I0819 13:23:07.283223 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.283234 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:07.283242 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:07.283312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:07.330025 2152780 cri.go:89] found id: ""
	I0819 13:23:07.330053 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.330065 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:07.330072 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:07.330137 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:07.381649 2152780 cri.go:89] found id: ""
	I0819 13:23:07.381675 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.381684 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:07.381690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:07.381744 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:07.436759 2152780 cri.go:89] found id: ""
	I0819 13:23:07.436794 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.436806 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:07.436814 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:07.436890 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:07.475843 2152780 cri.go:89] found id: ""
	I0819 13:23:07.475876 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.475885 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:07.475891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:07.475954 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:07.510531 2152780 cri.go:89] found id: ""
	I0819 13:23:07.510560 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.510569 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:07.510575 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:07.510641 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:07.546041 2152780 cri.go:89] found id: ""
	I0819 13:23:07.546075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:07.546087 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:07.546100 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:07.546118 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:07.559102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:07.559137 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:07.631002 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:07.631030 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:07.631045 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:07.705711 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:07.705751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:07.745114 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:07.745148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.302515 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:10.316028 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:10.316108 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:10.357431 2152780 cri.go:89] found id: ""
	I0819 13:23:10.357468 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.357477 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:10.357483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:10.357546 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:10.393541 2152780 cri.go:89] found id: ""
	I0819 13:23:10.393568 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.393577 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:10.393583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:10.393651 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:10.429651 2152780 cri.go:89] found id: ""
	I0819 13:23:10.429684 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.429692 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:10.429698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:10.429752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:10.469626 2152780 cri.go:89] found id: ""
	I0819 13:23:10.469690 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.469701 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:10.469706 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:10.469761 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:10.503015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.503048 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.503058 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:10.503065 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:10.503133 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:10.538127 2152780 cri.go:89] found id: ""
	I0819 13:23:10.538168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.538180 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:10.538188 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:10.538260 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:10.573015 2152780 cri.go:89] found id: ""
	I0819 13:23:10.573050 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.573058 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:10.573064 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:10.573134 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:10.608094 2152780 cri.go:89] found id: ""
	I0819 13:23:10.608125 2152780 logs.go:276] 0 containers: []
	W0819 13:23:10.608136 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:10.608150 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:10.608185 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:10.688926 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:10.688967 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:10.724554 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:10.724588 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:10.777619 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:10.777652 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:10.791106 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:10.791143 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:23:07.305198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.804759 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:06.961224 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.460095 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:09.843726 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.845072 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	W0819 13:23:10.865964 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.366960 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:13.381903 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:13.381973 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:13.420589 2152780 cri.go:89] found id: ""
	I0819 13:23:13.420620 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.420630 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:13.420637 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:13.420724 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:13.457809 2152780 cri.go:89] found id: ""
	I0819 13:23:13.457840 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.457850 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:13.457863 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:13.457927 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:13.496310 2152780 cri.go:89] found id: ""
	I0819 13:23:13.496347 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.496359 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:13.496368 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:13.496436 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:13.532057 2152780 cri.go:89] found id: ""
	I0819 13:23:13.532089 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.532097 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:13.532103 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:13.532183 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:13.575780 2152780 cri.go:89] found id: ""
	I0819 13:23:13.575814 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.575823 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:13.575829 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:13.575896 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:13.611484 2152780 cri.go:89] found id: ""
	I0819 13:23:13.611510 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.611520 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:13.611528 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:13.611594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:13.647456 2152780 cri.go:89] found id: ""
	I0819 13:23:13.647483 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.647491 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:13.647497 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:13.647569 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:13.681181 2152780 cri.go:89] found id: ""
	I0819 13:23:13.681212 2152780 logs.go:276] 0 containers: []
	W0819 13:23:13.681224 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:13.681237 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:13.681266 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:13.694680 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:13.694709 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:13.767546 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:13.767576 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:13.767597 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:13.851609 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:13.851649 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:13.893739 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:13.893778 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:12.304391 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.304840 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:11.962978 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.461299 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:14.344830 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.844720 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.446686 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:16.459850 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:16.459928 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:16.498671 2152780 cri.go:89] found id: ""
	I0819 13:23:16.498703 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.498712 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:16.498718 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:16.498775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:16.534655 2152780 cri.go:89] found id: ""
	I0819 13:23:16.534691 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.534704 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:16.534716 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:16.534781 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:16.571279 2152780 cri.go:89] found id: ""
	I0819 13:23:16.571317 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.571330 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:16.571337 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:16.571402 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:16.604843 2152780 cri.go:89] found id: ""
	I0819 13:23:16.604878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.604889 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:16.604897 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:16.604963 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:16.641150 2152780 cri.go:89] found id: ""
	I0819 13:23:16.641181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.641190 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:16.641201 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:16.641272 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:16.676737 2152780 cri.go:89] found id: ""
	I0819 13:23:16.676763 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.676773 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:16.676782 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:16.676847 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:16.709876 2152780 cri.go:89] found id: ""
	I0819 13:23:16.709908 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.709920 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:16.709928 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:16.709991 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:16.745249 2152780 cri.go:89] found id: ""
	I0819 13:23:16.745277 2152780 logs.go:276] 0 containers: []
	W0819 13:23:16.745293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:16.745304 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:16.745320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:16.796364 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:16.796400 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:16.811859 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:16.811887 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:16.886118 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:16.886144 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:16.886158 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.968304 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:16.968349 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.511448 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:19.523952 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:19.524023 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:19.556661 2152780 cri.go:89] found id: ""
	I0819 13:23:19.556696 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.556708 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:19.556715 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:19.556784 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:19.595956 2152780 cri.go:89] found id: ""
	I0819 13:23:19.595986 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.595995 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:19.596001 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:19.596055 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:19.631993 2152780 cri.go:89] found id: ""
	I0819 13:23:19.632021 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.632030 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:19.632038 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:19.632126 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:19.667871 2152780 cri.go:89] found id: ""
	I0819 13:23:19.667924 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.667935 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:19.667944 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:19.668007 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:19.703585 2152780 cri.go:89] found id: ""
	I0819 13:23:19.703615 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.703624 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:19.703630 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:19.703684 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:19.737220 2152780 cri.go:89] found id: ""
	I0819 13:23:19.737252 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.737263 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:19.737272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:19.737343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:19.775526 2152780 cri.go:89] found id: ""
	I0819 13:23:19.775570 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.775587 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:19.775595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:19.775671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:19.810611 2152780 cri.go:89] found id: ""
	I0819 13:23:19.810645 2152780 logs.go:276] 0 containers: []
	W0819 13:23:19.810656 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:19.810667 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:19.810682 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:19.855138 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:19.855167 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:19.906988 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:19.907025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:19.920451 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:19.920481 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:19.989054 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:19.989086 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:19.989101 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:16.803686 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.804090 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.804422 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:16.463889 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.961085 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:20.961248 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:18.845050 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:21.344704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.571624 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:22.585585 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:22.585667 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:22.620456 2152780 cri.go:89] found id: ""
	I0819 13:23:22.620488 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.620497 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:22.620509 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:22.620562 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:22.658971 2152780 cri.go:89] found id: ""
	I0819 13:23:22.659005 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.659017 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:22.659024 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:22.659094 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:22.696236 2152780 cri.go:89] found id: ""
	I0819 13:23:22.696269 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.696281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:22.696290 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:22.696354 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:22.735260 2152780 cri.go:89] found id: ""
	I0819 13:23:22.735296 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.735306 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:22.735313 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:22.735365 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.770593 2152780 cri.go:89] found id: ""
	I0819 13:23:22.770625 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.770637 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:22.770644 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:22.770716 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:22.806348 2152780 cri.go:89] found id: ""
	I0819 13:23:22.806375 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.806400 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:22.806418 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:22.806489 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:22.843812 2152780 cri.go:89] found id: ""
	I0819 13:23:22.843841 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.843850 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:22.843858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:22.843929 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:22.880607 2152780 cri.go:89] found id: ""
	I0819 13:23:22.880641 2152780 logs.go:276] 0 containers: []
	W0819 13:23:22.880655 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:22.880669 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:22.880695 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:22.930816 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:22.930855 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:22.946795 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:22.946829 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:23.021065 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:23.021094 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:23.021110 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:23.098083 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:23.098129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:25.641104 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:25.654226 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:25.654297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:25.689403 2152780 cri.go:89] found id: ""
	I0819 13:23:25.689429 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.689437 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:25.689445 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:25.689496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:25.725340 2152780 cri.go:89] found id: ""
	I0819 13:23:25.725380 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.725393 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:25.725401 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:25.725472 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:25.759133 2152780 cri.go:89] found id: ""
	I0819 13:23:25.759168 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.759178 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:25.759184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:25.759253 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:25.793605 2152780 cri.go:89] found id: ""
	I0819 13:23:25.793636 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.793649 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:25.793657 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:25.793731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:22.804810 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.305119 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:22.961560 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.460663 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:23.344790 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.344975 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.843772 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:25.828355 2152780 cri.go:89] found id: ""
	I0819 13:23:25.828391 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.828403 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:25.828411 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:25.828488 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:25.869621 2152780 cri.go:89] found id: ""
	I0819 13:23:25.869658 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.869669 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:25.869676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:25.869731 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:25.907749 2152780 cri.go:89] found id: ""
	I0819 13:23:25.907786 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.907795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:25.907801 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:25.907870 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:25.942443 2152780 cri.go:89] found id: ""
	I0819 13:23:25.942472 2152780 logs.go:276] 0 containers: []
	W0819 13:23:25.942484 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:25.942497 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:25.942515 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:26.016016 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:26.016044 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:26.016058 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:26.097427 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:26.097475 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:26.140275 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:26.140320 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:26.189858 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:26.189894 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:28.705027 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:28.717999 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:28.718071 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:28.752233 2152780 cri.go:89] found id: ""
	I0819 13:23:28.752263 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.752279 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:28.752286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:28.752343 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:28.786033 2152780 cri.go:89] found id: ""
	I0819 13:23:28.786071 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.786080 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:28.786086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:28.786143 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:28.821105 2152780 cri.go:89] found id: ""
	I0819 13:23:28.821135 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.821165 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:28.821173 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:28.821243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:28.859179 2152780 cri.go:89] found id: ""
	I0819 13:23:28.859226 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.859237 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:28.859244 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:28.859319 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:28.894213 2152780 cri.go:89] found id: ""
	I0819 13:23:28.894250 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.894263 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:28.894272 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:28.894351 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:28.927807 2152780 cri.go:89] found id: ""
	I0819 13:23:28.927836 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.927848 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:28.927856 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:28.927924 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:28.965430 2152780 cri.go:89] found id: ""
	I0819 13:23:28.965464 2152780 logs.go:276] 0 containers: []
	W0819 13:23:28.965476 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:28.965485 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:28.965555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:29.008894 2152780 cri.go:89] found id: ""
	I0819 13:23:29.008920 2152780 logs.go:276] 0 containers: []
	W0819 13:23:29.008928 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:29.008938 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:29.008951 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:29.089617 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:29.089644 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:29.089657 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:29.173271 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:29.173310 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:29.216057 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:29.216098 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:29.267076 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:29.267131 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:27.804872 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:30.304909 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:27.461788 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.961548 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:29.844049 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.846007 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.782574 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:31.797583 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:31.797655 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:31.836889 2152780 cri.go:89] found id: ""
	I0819 13:23:31.836926 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.836939 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:31.836948 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:31.837021 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:31.875693 2152780 cri.go:89] found id: ""
	I0819 13:23:31.875725 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.875734 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:31.875740 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:31.875813 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:31.909847 2152780 cri.go:89] found id: ""
	I0819 13:23:31.909878 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.909887 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:31.909895 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:31.909971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:31.944122 2152780 cri.go:89] found id: ""
	I0819 13:23:31.944155 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.944184 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:31.944193 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:31.944265 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:31.979342 2152780 cri.go:89] found id: ""
	I0819 13:23:31.979373 2152780 logs.go:276] 0 containers: []
	W0819 13:23:31.979383 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:31.979390 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:31.979461 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:32.013001 2152780 cri.go:89] found id: ""
	I0819 13:23:32.013049 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.013060 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:32.013067 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:32.013148 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:32.047975 2152780 cri.go:89] found id: ""
	I0819 13:23:32.048003 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.048012 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:32.048017 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:32.048074 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:32.082151 2152780 cri.go:89] found id: ""
	I0819 13:23:32.082181 2152780 logs.go:276] 0 containers: []
	W0819 13:23:32.082192 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:32.082205 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:32.082241 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:32.120254 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:32.120286 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.171594 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:32.171634 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:32.185654 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:32.185687 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:32.251542 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:32.251568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:32.251586 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:34.831302 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:34.846902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:34.846980 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:34.883697 2152780 cri.go:89] found id: ""
	I0819 13:23:34.883731 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.883747 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:34.883756 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:34.883821 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:34.918184 2152780 cri.go:89] found id: ""
	I0819 13:23:34.918220 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.918232 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:34.918240 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:34.918321 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:34.951621 2152780 cri.go:89] found id: ""
	I0819 13:23:34.951653 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.951664 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:34.951673 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:34.951735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:34.987330 2152780 cri.go:89] found id: ""
	I0819 13:23:34.987362 2152780 logs.go:276] 0 containers: []
	W0819 13:23:34.987373 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:34.987381 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:34.987454 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:35.022480 2152780 cri.go:89] found id: ""
	I0819 13:23:35.022518 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.022531 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:35.022539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:35.022606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:35.059717 2152780 cri.go:89] found id: ""
	I0819 13:23:35.059753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.059769 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:35.059776 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:35.059835 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:35.098877 2152780 cri.go:89] found id: ""
	I0819 13:23:35.098912 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.098924 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:35.098932 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:35.098995 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:35.137392 2152780 cri.go:89] found id: ""
	I0819 13:23:35.137422 2152780 logs.go:276] 0 containers: []
	W0819 13:23:35.137429 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:35.137438 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:35.137456 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:35.151134 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:35.151177 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:35.216821 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:35.216852 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:35.216870 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:35.300662 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:35.300717 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:35.344627 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:35.344661 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:32.803063 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.804202 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:31.962287 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.460360 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:34.344876 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.345734 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:37.899152 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:37.913091 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:37.913161 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:37.949684 2152780 cri.go:89] found id: ""
	I0819 13:23:37.949713 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.949722 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:37.949728 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:37.949791 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:37.984480 2152780 cri.go:89] found id: ""
	I0819 13:23:37.984509 2152780 logs.go:276] 0 containers: []
	W0819 13:23:37.984518 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:37.984524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:37.984591 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:38.022228 2152780 cri.go:89] found id: ""
	I0819 13:23:38.022264 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.022277 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:38.022285 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:38.022360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:38.056117 2152780 cri.go:89] found id: ""
	I0819 13:23:38.056147 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.056159 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:38.056184 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:38.056252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:38.089157 2152780 cri.go:89] found id: ""
	I0819 13:23:38.089187 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.089204 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:38.089212 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:38.089279 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:38.123335 2152780 cri.go:89] found id: ""
	I0819 13:23:38.123369 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.123378 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:38.123384 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:38.123444 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:38.159555 2152780 cri.go:89] found id: ""
	I0819 13:23:38.159584 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.159596 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:38.159604 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:38.159671 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:38.200728 2152780 cri.go:89] found id: ""
	I0819 13:23:38.200765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:38.200777 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:38.200789 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:38.200809 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:38.251652 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:38.251684 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:38.264834 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:38.264862 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:38.341259 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:38.341287 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:38.341304 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:38.417381 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:38.417423 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:36.805360 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:39.303731 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:36.461458 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.962012 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:38.347922 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.845195 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:40.959227 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:40.972845 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:40.972937 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:41.009180 2152780 cri.go:89] found id: ""
	I0819 13:23:41.009219 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.009233 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:41.009241 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:41.009313 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:41.044055 2152780 cri.go:89] found id: ""
	I0819 13:23:41.044088 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.044100 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:41.044109 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:41.044203 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:41.081073 2152780 cri.go:89] found id: ""
	I0819 13:23:41.081098 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.081108 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:41.081114 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:41.081166 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:41.113973 2152780 cri.go:89] found id: ""
	I0819 13:23:41.114011 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.114024 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:41.114031 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:41.114105 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:41.149480 2152780 cri.go:89] found id: ""
	I0819 13:23:41.149515 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.149527 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:41.149539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:41.149632 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:41.183756 2152780 cri.go:89] found id: ""
	I0819 13:23:41.183788 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.183800 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:41.183808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:41.183884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:41.217629 2152780 cri.go:89] found id: ""
	I0819 13:23:41.217670 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.217683 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:41.217691 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:41.217755 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:41.254948 2152780 cri.go:89] found id: ""
	I0819 13:23:41.254987 2152780 logs.go:276] 0 containers: []
	W0819 13:23:41.255000 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:41.255011 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:41.255025 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:41.293080 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:41.293116 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.349391 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:41.349437 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:41.363720 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:41.363751 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:41.459273 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:41.459299 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:41.459315 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.043239 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:44.057981 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:44.058072 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:44.091216 2152780 cri.go:89] found id: ""
	I0819 13:23:44.091247 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.091256 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:44.091263 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:44.091336 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:44.128729 2152780 cri.go:89] found id: ""
	I0819 13:23:44.128761 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.128772 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:44.128779 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:44.128865 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:44.167730 2152780 cri.go:89] found id: ""
	I0819 13:23:44.167765 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.167777 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:44.167784 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:44.167852 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:44.202323 2152780 cri.go:89] found id: ""
	I0819 13:23:44.202355 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.202366 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:44.202373 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:44.202451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:44.237030 2152780 cri.go:89] found id: ""
	I0819 13:23:44.237074 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.237088 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:44.237096 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:44.237164 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:44.273319 2152780 cri.go:89] found id: ""
	I0819 13:23:44.273356 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.273370 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:44.273378 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:44.273459 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:44.313267 2152780 cri.go:89] found id: ""
	I0819 13:23:44.313300 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.313313 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:44.313322 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:44.313401 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:44.353073 2152780 cri.go:89] found id: ""
	I0819 13:23:44.353099 2152780 logs.go:276] 0 containers: []
	W0819 13:23:44.353107 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:44.353115 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:44.353129 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:44.366102 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:44.366147 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:44.438580 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:44.438612 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:44.438628 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:44.518200 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:44.518250 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:44.559362 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:44.559396 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:41.308823 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.804769 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:41.461933 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.961713 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:43.344606 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:45.344727 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.345821 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:47.114604 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:47.128989 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:47.129067 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:47.174286 2152780 cri.go:89] found id: ""
	I0819 13:23:47.174316 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.174324 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:47.174331 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:47.174391 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:47.208244 2152780 cri.go:89] found id: ""
	I0819 13:23:47.208282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.208291 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:47.208297 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:47.208360 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:47.245609 2152780 cri.go:89] found id: ""
	I0819 13:23:47.245638 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.245645 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:47.245651 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:47.245703 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:47.280617 2152780 cri.go:89] found id: ""
	I0819 13:23:47.280646 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.280655 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:47.280661 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:47.280726 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:47.317662 2152780 cri.go:89] found id: ""
	I0819 13:23:47.317695 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.317708 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:47.317719 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:47.317789 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:47.359113 2152780 cri.go:89] found id: ""
	I0819 13:23:47.359143 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.359152 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:47.359158 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:47.359243 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:47.396520 2152780 cri.go:89] found id: ""
	I0819 13:23:47.396550 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.396561 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:47.396568 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:47.396636 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:47.433043 2152780 cri.go:89] found id: ""
	I0819 13:23:47.433075 2152780 logs.go:276] 0 containers: []
	W0819 13:23:47.433086 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:47.433097 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:47.433111 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:47.491984 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:47.492023 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:47.505775 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:47.505805 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:47.573541 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:47.573568 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:47.573584 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:47.657102 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:47.657148 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.201855 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:50.216186 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:50.216250 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:50.254505 2152780 cri.go:89] found id: ""
	I0819 13:23:50.254542 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.254554 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:50.254561 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:50.254627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:50.288971 2152780 cri.go:89] found id: ""
	I0819 13:23:50.289006 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.289015 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:50.289021 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:50.289086 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:50.325243 2152780 cri.go:89] found id: ""
	I0819 13:23:50.325282 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.325293 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:50.325302 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:50.325377 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:50.361776 2152780 cri.go:89] found id: ""
	I0819 13:23:50.361810 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.361822 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:50.361830 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:50.361892 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:50.395380 2152780 cri.go:89] found id: ""
	I0819 13:23:50.395412 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.395422 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:50.395428 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:50.395496 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:50.429299 2152780 cri.go:89] found id: ""
	I0819 13:23:50.429336 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.429348 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:50.429356 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:50.429423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:50.471214 2152780 cri.go:89] found id: ""
	I0819 13:23:50.471236 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.471243 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:50.471249 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:50.471302 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:50.510258 2152780 cri.go:89] found id: ""
	I0819 13:23:50.510298 2152780 logs.go:276] 0 containers: []
	W0819 13:23:50.510309 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:50.510322 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:50.510342 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:50.547398 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:50.547431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:50.598425 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:50.598466 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:50.613308 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:50.613341 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:50.685803 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:50.685834 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:50.685851 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:46.303155 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.305240 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.804319 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:46.461162 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:48.961356 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:50.961557 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:49.844368 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.345173 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:53.264007 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:53.277505 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:53.277596 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:53.320407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.320436 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.320444 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:53.320450 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:53.320508 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:53.364578 2152780 cri.go:89] found id: ""
	I0819 13:23:53.364628 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.364650 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:53.364660 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:53.364732 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:53.404407 2152780 cri.go:89] found id: ""
	I0819 13:23:53.404437 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.404446 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:53.404452 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:53.404522 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:53.440743 2152780 cri.go:89] found id: ""
	I0819 13:23:53.440775 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.440784 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:53.440791 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:53.440858 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:53.476151 2152780 cri.go:89] found id: ""
	I0819 13:23:53.476200 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.476211 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:53.476217 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:53.476297 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:53.512531 2152780 cri.go:89] found id: ""
	I0819 13:23:53.512559 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.512567 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:53.512573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:53.512640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:53.562487 2152780 cri.go:89] found id: ""
	I0819 13:23:53.562524 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.562535 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:53.562543 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:53.562617 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:53.627507 2152780 cri.go:89] found id: ""
	I0819 13:23:53.627545 2152780 logs.go:276] 0 containers: []
	W0819 13:23:53.627556 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:53.627566 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:53.627587 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:53.686224 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:53.686270 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:53.700659 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:53.700693 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:53.776506 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:53.776544 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:53.776564 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:53.855375 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:53.855412 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:53.305180 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.803826 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:52.962161 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:55.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:54.345531 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.845217 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:56.398105 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:56.411245 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:56.411312 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:56.444097 2152780 cri.go:89] found id: ""
	I0819 13:23:56.444129 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.444138 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:56.444144 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:56.444214 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:56.480958 2152780 cri.go:89] found id: ""
	I0819 13:23:56.480993 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.481004 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:56.481013 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:56.481077 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:56.514657 2152780 cri.go:89] found id: ""
	I0819 13:23:56.514687 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.514696 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:56.514707 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:56.514762 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:56.548615 2152780 cri.go:89] found id: ""
	I0819 13:23:56.548647 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.548659 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:56.548667 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:56.548735 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:56.585801 2152780 cri.go:89] found id: ""
	I0819 13:23:56.585833 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.585844 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:56.585852 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:56.585916 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:56.617804 2152780 cri.go:89] found id: ""
	I0819 13:23:56.617832 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.617843 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:56.617851 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:56.617919 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:56.655726 2152780 cri.go:89] found id: ""
	I0819 13:23:56.655753 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.655761 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:56.655768 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:56.655820 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:56.691057 2152780 cri.go:89] found id: ""
	I0819 13:23:56.691087 2152780 logs.go:276] 0 containers: []
	W0819 13:23:56.691099 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:56.691111 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:56.691127 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:56.758836 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:56.758860 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:56.758879 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:56.836388 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:56.836427 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:56.875753 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:56.875781 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:56.928126 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:56.928184 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.442721 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:23:59.456581 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:23:59.456742 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:23:59.492452 2152780 cri.go:89] found id: ""
	I0819 13:23:59.492480 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.492488 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:23:59.492494 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:23:59.492544 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:23:59.527258 2152780 cri.go:89] found id: ""
	I0819 13:23:59.527291 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.527301 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:23:59.527311 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:23:59.527366 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:23:59.562712 2152780 cri.go:89] found id: ""
	I0819 13:23:59.562744 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.562757 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:23:59.562765 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:23:59.562836 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:23:59.597522 2152780 cri.go:89] found id: ""
	I0819 13:23:59.597557 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.597568 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:23:59.597577 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:23:59.597649 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:23:59.638820 2152780 cri.go:89] found id: ""
	I0819 13:23:59.638852 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.638865 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:23:59.638873 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:23:59.638941 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:23:59.682623 2152780 cri.go:89] found id: ""
	I0819 13:23:59.682656 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.682667 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:23:59.682676 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:23:59.682746 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:23:59.718731 2152780 cri.go:89] found id: ""
	I0819 13:23:59.718782 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.718795 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:23:59.718808 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:23:59.718881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:23:59.754799 2152780 cri.go:89] found id: ""
	I0819 13:23:59.754830 2152780 logs.go:276] 0 containers: []
	W0819 13:23:59.754840 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:23:59.754866 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:23:59.754890 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:23:59.807650 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:23:59.807683 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:23:59.821699 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:23:59.821727 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:23:59.901940 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:23:59.901962 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:23:59.901975 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:23:59.979019 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:23:59.979055 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:23:57.804198 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:00.303807 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:57.960616 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.961190 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:23:59.344403 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:01.344800 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.519279 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:02.532541 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:02.532606 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:02.566287 2152780 cri.go:89] found id: ""
	I0819 13:24:02.566327 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.566338 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:02.566347 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:02.566423 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:02.599874 2152780 cri.go:89] found id: ""
	I0819 13:24:02.599907 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.599920 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:02.599927 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:02.600000 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:02.633609 2152780 cri.go:89] found id: ""
	I0819 13:24:02.633642 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.633662 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:02.633671 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:02.633741 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:02.669105 2152780 cri.go:89] found id: ""
	I0819 13:24:02.669133 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.669142 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:02.669151 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:02.669245 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:02.704719 2152780 cri.go:89] found id: ""
	I0819 13:24:02.704756 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.704769 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:02.704777 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:02.704846 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:02.744694 2152780 cri.go:89] found id: ""
	I0819 13:24:02.744726 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.744736 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:02.744743 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:02.744818 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.779911 2152780 cri.go:89] found id: ""
	I0819 13:24:02.779945 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.779959 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:02.779969 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:02.780034 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:02.815235 2152780 cri.go:89] found id: ""
	I0819 13:24:02.815277 2152780 logs.go:276] 0 containers: []
	W0819 13:24:02.815293 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:02.815303 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:02.815319 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:02.864822 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:02.864852 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:02.879626 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:02.879655 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:02.951475 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:02.951504 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:02.951522 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:03.031154 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:03.031196 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.573907 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:05.586737 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:05.586798 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:05.623912 2152780 cri.go:89] found id: ""
	I0819 13:24:05.623946 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.623957 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:05.623965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:05.624037 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:05.659399 2152780 cri.go:89] found id: ""
	I0819 13:24:05.659427 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.659435 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:05.659441 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:05.659494 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:05.693173 2152780 cri.go:89] found id: ""
	I0819 13:24:05.693207 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.693219 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:05.693228 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:05.693289 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:05.725641 2152780 cri.go:89] found id: ""
	I0819 13:24:05.725670 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.725680 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:05.725688 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:05.725752 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:05.759559 2152780 cri.go:89] found id: ""
	I0819 13:24:05.759593 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.759606 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:05.759614 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:05.759686 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:05.793022 2152780 cri.go:89] found id: ""
	I0819 13:24:05.793051 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.793063 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:05.793071 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:05.793147 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:02.304309 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.304491 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:02.461249 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:04.962079 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:03.344928 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.845172 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:05.828997 2152780 cri.go:89] found id: ""
	I0819 13:24:05.829027 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.829037 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:05.829045 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:05.829111 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:05.862852 2152780 cri.go:89] found id: ""
	I0819 13:24:05.862885 2152780 logs.go:276] 0 containers: []
	W0819 13:24:05.862895 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:05.862908 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:05.862924 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:05.901295 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:05.901334 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:05.956106 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:05.956146 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:05.971498 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:05.971525 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:06.040630 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:06.040653 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:06.040670 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:08.617393 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:08.631902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:08.631971 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:08.669456 2152780 cri.go:89] found id: ""
	I0819 13:24:08.669512 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.669526 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:08.669539 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:08.669630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:08.704825 2152780 cri.go:89] found id: ""
	I0819 13:24:08.704857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.704866 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:08.704874 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:08.704933 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:08.738841 2152780 cri.go:89] found id: ""
	I0819 13:24:08.738879 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.738893 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:08.738902 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:08.738961 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:08.775865 2152780 cri.go:89] found id: ""
	I0819 13:24:08.775899 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.775909 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:08.775918 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:08.775981 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:08.813452 2152780 cri.go:89] found id: ""
	I0819 13:24:08.813482 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.813493 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:08.813501 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:08.813572 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:08.850638 2152780 cri.go:89] found id: ""
	I0819 13:24:08.850669 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.850681 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:08.850690 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:08.850758 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:08.886750 2152780 cri.go:89] found id: ""
	I0819 13:24:08.886789 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.886801 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:08.886812 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:08.886881 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:08.922112 2152780 cri.go:89] found id: ""
	I0819 13:24:08.922145 2152780 logs.go:276] 0 containers: []
	W0819 13:24:08.922154 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:08.922164 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:08.922181 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:08.997014 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:08.997042 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:08.997059 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:09.074338 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:09.074388 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:09.114430 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:09.114460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:09.169581 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:09.169620 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:06.804761 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.809429 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:07.465846 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:09.961611 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:08.343646 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:10.344919 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:12.844008 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.685453 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:11.698438 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:11.698509 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:11.730814 2152780 cri.go:89] found id: ""
	I0819 13:24:11.730843 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.730852 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:11.730858 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:11.730914 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:11.768426 2152780 cri.go:89] found id: ""
	I0819 13:24:11.768460 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.768473 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:11.768483 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:11.768555 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:11.802718 2152780 cri.go:89] found id: ""
	I0819 13:24:11.802753 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.802765 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:11.802774 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:11.802841 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:11.838016 2152780 cri.go:89] found id: ""
	I0819 13:24:11.838058 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.838066 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:11.838073 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:11.838129 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:11.875869 2152780 cri.go:89] found id: ""
	I0819 13:24:11.875896 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.875904 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:11.875911 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:11.875967 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:11.911538 2152780 cri.go:89] found id: ""
	I0819 13:24:11.911572 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.911580 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:11.911586 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:11.911640 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:11.947762 2152780 cri.go:89] found id: ""
	I0819 13:24:11.947793 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.947803 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:11.947810 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:11.947884 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:11.989671 2152780 cri.go:89] found id: ""
	I0819 13:24:11.989706 2152780 logs.go:276] 0 containers: []
	W0819 13:24:11.989715 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:11.989726 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:11.989744 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:12.054707 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:12.054733 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:12.054749 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:12.137525 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:12.137572 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:12.175806 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:12.175834 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:12.227606 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:12.227651 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:14.743968 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:14.757178 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:14.757252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:14.791379 2152780 cri.go:89] found id: ""
	I0819 13:24:14.791409 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.791421 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:14.791430 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:14.791510 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:14.826221 2152780 cri.go:89] found id: ""
	I0819 13:24:14.826250 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.826259 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:14.826265 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:14.826327 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:14.870239 2152780 cri.go:89] found id: ""
	I0819 13:24:14.870269 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.870281 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:14.870345 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:14.870450 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:14.907473 2152780 cri.go:89] found id: ""
	I0819 13:24:14.907505 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.907515 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:14.907524 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:14.907594 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:14.942547 2152780 cri.go:89] found id: ""
	I0819 13:24:14.942585 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.942594 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:14.942600 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:14.942658 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:14.982507 2152780 cri.go:89] found id: ""
	I0819 13:24:14.982536 2152780 logs.go:276] 0 containers: []
	W0819 13:24:14.982547 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:14.982555 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:14.982627 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:15.020572 2152780 cri.go:89] found id: ""
	I0819 13:24:15.020604 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.020612 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:15.020619 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:15.020691 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:15.057076 2152780 cri.go:89] found id: ""
	I0819 13:24:15.057105 2152780 logs.go:276] 0 containers: []
	W0819 13:24:15.057114 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:15.057123 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:15.057138 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:15.109111 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:15.109149 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:15.124049 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:15.124076 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:15.187413 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:15.187444 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:15.187460 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:15.271009 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:15.271049 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:11.303722 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.804077 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:15.804497 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:11.961757 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:13.961949 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:14.844944 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.344565 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:17.810205 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:17.823344 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:17.823417 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:17.862704 2152780 cri.go:89] found id: ""
	I0819 13:24:17.862734 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.862743 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:17.862750 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:17.862807 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:17.900371 2152780 cri.go:89] found id: ""
	I0819 13:24:17.900404 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.900414 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:17.900420 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:17.900476 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:17.935518 2152780 cri.go:89] found id: ""
	I0819 13:24:17.935549 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.935559 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:17.935566 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:17.935622 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:17.972592 2152780 cri.go:89] found id: ""
	I0819 13:24:17.972624 2152780 logs.go:276] 0 containers: []
	W0819 13:24:17.972637 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:17.972647 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:17.972717 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:18.008636 2152780 cri.go:89] found id: ""
	I0819 13:24:18.008675 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.008686 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:18.008698 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:18.008775 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:18.040823 2152780 cri.go:89] found id: ""
	I0819 13:24:18.040857 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.040866 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:18.040872 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:18.040934 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:18.079443 2152780 cri.go:89] found id: ""
	I0819 13:24:18.079472 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.079481 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:18.079488 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:18.079547 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:18.118717 2152780 cri.go:89] found id: ""
	I0819 13:24:18.118746 2152780 logs.go:276] 0 containers: []
	W0819 13:24:18.118756 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:18.118766 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:18.118780 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:18.172588 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:18.172631 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:18.186427 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:18.186457 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:18.258534 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:18.258567 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:18.258585 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:18.335882 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:18.335922 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:17.804682 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.805544 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:16.461777 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:18.462005 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:20.960925 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:19.345299 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:21.843704 2152365 pod_ready.go:103] pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.844383 2152365 pod_ready.go:82] duration metric: took 4m0.006413479s for pod "metrics-server-6867b74b74-hmghm" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:22.844420 2152365 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:24:22.844431 2152365 pod_ready.go:39] duration metric: took 4m3.697261125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:22.844451 2152365 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:24:22.844496 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:22.844605 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:22.894620 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:22.894650 2152365 cri.go:89] found id: ""
	I0819 13:24:22.894661 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:22.894731 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.899300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:22.899365 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:22.934260 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:22.934285 2152365 cri.go:89] found id: ""
	I0819 13:24:22.934320 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:22.934380 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.939865 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:22.939927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:22.982433 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:22.982464 2152365 cri.go:89] found id: ""
	I0819 13:24:22.982475 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:22.982550 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:22.987038 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:22.987119 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:23.025231 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.025264 2152365 cri.go:89] found id: ""
	I0819 13:24:23.025275 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:23.025349 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.030567 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:23.030633 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:23.073663 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.073687 2152365 cri.go:89] found id: ""
	I0819 13:24:23.073695 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:23.073753 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.078563 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:23.078646 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:23.114597 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.114621 2152365 cri.go:89] found id: ""
	I0819 13:24:23.114629 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:23.114688 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.119022 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:23.119091 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:23.161263 2152365 cri.go:89] found id: ""
	I0819 13:24:23.161302 2152365 logs.go:276] 0 containers: []
	W0819 13:24:23.161311 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:23.161317 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:23.161373 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:23.211140 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.211167 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:23.211173 2152365 cri.go:89] found id: ""
	I0819 13:24:23.211182 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:23.211271 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.215875 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:23.220021 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:23.220046 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:23.283965 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:23.284004 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:20.878687 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:20.891965 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:20.892042 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:20.926029 2152780 cri.go:89] found id: ""
	I0819 13:24:20.926057 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.926070 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:20.926077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:20.926142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:20.961025 2152780 cri.go:89] found id: ""
	I0819 13:24:20.961056 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.961067 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:20.961077 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:20.961142 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:20.993767 2152780 cri.go:89] found id: ""
	I0819 13:24:20.993800 2152780 logs.go:276] 0 containers: []
	W0819 13:24:20.993813 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:20.993821 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:20.993886 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:21.025250 2152780 cri.go:89] found id: ""
	I0819 13:24:21.025285 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.025294 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:21.025301 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:21.025356 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:21.057832 2152780 cri.go:89] found id: ""
	I0819 13:24:21.057863 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.057871 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:21.057877 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:21.057930 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:21.092372 2152780 cri.go:89] found id: ""
	I0819 13:24:21.092400 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.092408 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:21.092419 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:21.092486 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:21.126138 2152780 cri.go:89] found id: ""
	I0819 13:24:21.126177 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.126187 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:21.126194 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:21.126262 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:21.157779 2152780 cri.go:89] found id: ""
	I0819 13:24:21.157806 2152780 logs.go:276] 0 containers: []
	W0819 13:24:21.157814 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:21.157823 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:21.157835 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:21.240776 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:21.240813 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:21.278705 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:21.278742 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:21.331086 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:21.331125 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:21.346399 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:21.346431 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:21.424929 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:23.925325 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:23.938286 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:23.938380 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:23.974038 2152780 cri.go:89] found id: ""
	I0819 13:24:23.974070 2152780 logs.go:276] 0 containers: []
	W0819 13:24:23.974079 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:24:23.974086 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:23.974144 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:24.007828 2152780 cri.go:89] found id: ""
	I0819 13:24:24.007865 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.007876 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:24:24.007883 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:24.007951 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:24.042152 2152780 cri.go:89] found id: ""
	I0819 13:24:24.042182 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.042192 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:24:24.042198 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:24.042252 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:24.076334 2152780 cri.go:89] found id: ""
	I0819 13:24:24.076366 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.076376 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:24:24.076382 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:24.076451 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:24.116333 2152780 cri.go:89] found id: ""
	I0819 13:24:24.116368 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.116380 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:24:24.116388 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:24.116449 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:24.157126 2152780 cri.go:89] found id: ""
	I0819 13:24:24.157154 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.157163 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:24:24.157170 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:24.157224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:24.197124 2152780 cri.go:89] found id: ""
	I0819 13:24:24.197153 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.197161 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:24.197167 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:24:24.197224 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:24:24.232662 2152780 cri.go:89] found id: ""
	I0819 13:24:24.232690 2152780 logs.go:276] 0 containers: []
	W0819 13:24:24.232698 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:24:24.232709 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:24.232721 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.310535 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.310573 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.351436 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:24.351469 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:24.409022 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.409061 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.423392 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.423426 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:24:24.499638 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:24:22.305024 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:24.305228 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:22.961886 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:25.460596 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.000428 2152780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:27.016949 2152780 kubeadm.go:597] duration metric: took 4m2.883005064s to restartPrimaryControlPlane
	W0819 13:24:27.017029 2152780 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:27.017066 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:27.478402 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:27.494351 2152780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:24:27.505001 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:24:27.515444 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:24:27.515467 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:24:27.515523 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:24:27.524646 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:24:27.524697 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:24:27.535343 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:24:27.544402 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:24:27.544456 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:24:27.554655 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.564636 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:24:27.564709 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:24:27.575078 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:24:27.584856 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:24:27.584924 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:24:27.595099 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:24:27.675533 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:24:27.675638 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:24:27.827613 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:24:27.827770 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:24:27.827924 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:24:28.032127 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:24:23.323805 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:23.323834 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:23.400053 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:23.400107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:23.438155 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:23.438196 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:23.483737 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:23.483772 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:23.536000 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:23.536042 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:23.574211 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:23.574258 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:23.620141 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:23.620203 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:24.130515 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:24.130556 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:24.185924 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:24.185959 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:24.202306 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:24.202348 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:24.342635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:24.342675 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.895433 2152365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:24:26.911884 2152365 api_server.go:72] duration metric: took 4m15.522363036s to wait for apiserver process to appear ...
	I0819 13:24:26.911914 2152365 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:24:26.911959 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:26.912026 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:26.949386 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:26.949423 2152365 cri.go:89] found id: ""
	I0819 13:24:26.949436 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:26.949509 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.954854 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:26.954937 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:26.993702 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:26.993732 2152365 cri.go:89] found id: ""
	I0819 13:24:26.993746 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:26.993806 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:26.997800 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:26.997873 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:27.044488 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:27.044521 2152365 cri.go:89] found id: ""
	I0819 13:24:27.044532 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:27.044612 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.049411 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:27.049488 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:27.098304 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.098328 2152365 cri.go:89] found id: ""
	I0819 13:24:27.098336 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:27.098388 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.102393 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:27.102448 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:27.138927 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.138952 2152365 cri.go:89] found id: ""
	I0819 13:24:27.138960 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:27.139019 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.143058 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:27.143113 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:27.183816 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:27.183846 2152365 cri.go:89] found id: ""
	I0819 13:24:27.183856 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:27.183918 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.188818 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:27.188880 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:27.231255 2152365 cri.go:89] found id: ""
	I0819 13:24:27.231286 2152365 logs.go:276] 0 containers: []
	W0819 13:24:27.231294 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:27.231300 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:27.231352 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:27.265165 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.265189 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:27.265194 2152365 cri.go:89] found id: ""
	I0819 13:24:27.265204 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:27.265267 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.269294 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:27.273771 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:27.273796 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:27.315407 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:27.315446 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:27.359309 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:27.359343 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:27.817261 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:27.817316 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:27.834635 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:27.834676 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:27.888137 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:27.888194 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:27.940367 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:27.940402 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:27.981404 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:27.981448 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:28.026666 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:28.026702 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:28.084387 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:28.084419 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:28.160615 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:28.160667 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:28.276547 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:28.276584 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:28.033928 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:24:28.034048 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:24:28.034129 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:24:28.034243 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:24:28.034320 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:24:28.034497 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:24:28.034852 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:24:28.035397 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:24:28.035736 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:24:28.036257 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:24:28.036850 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:24:28.037014 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:24:28.037102 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:24:28.386155 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:24:28.571457 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:24:28.835239 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:24:28.967421 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:24:28.990941 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:24:28.991082 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:24:28.991148 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:24:29.149979 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:24:29.151782 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:24:29.151943 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:24:29.161108 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:24:29.161209 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:24:29.161316 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:24:29.172129 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:24:26.804321 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.304110 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:27.461339 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:29.462208 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:28.316651 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:28.316701 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:30.876679 2152365 api_server.go:253] Checking apiserver healthz at https://192.168.72.69:8444/healthz ...
	I0819 13:24:30.882671 2152365 api_server.go:279] https://192.168.72.69:8444/healthz returned 200:
	ok
	I0819 13:24:30.883817 2152365 api_server.go:141] control plane version: v1.31.0
	I0819 13:24:30.883842 2152365 api_server.go:131] duration metric: took 3.971920414s to wait for apiserver health ...
	I0819 13:24:30.883850 2152365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:24:30.883876 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:24:30.883927 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:24:30.921225 2152365 cri.go:89] found id: "c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:30.921255 2152365 cri.go:89] found id: ""
	I0819 13:24:30.921266 2152365 logs.go:276] 1 containers: [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b]
	I0819 13:24:30.921336 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.925961 2152365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:24:30.926033 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:24:30.963834 2152365 cri.go:89] found id: "9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:30.963861 2152365 cri.go:89] found id: ""
	I0819 13:24:30.963872 2152365 logs.go:276] 1 containers: [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1]
	I0819 13:24:30.963947 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:30.968020 2152365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:24:30.968085 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:24:31.004690 2152365 cri.go:89] found id: "0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.004717 2152365 cri.go:89] found id: ""
	I0819 13:24:31.004726 2152365 logs.go:276] 1 containers: [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6]
	I0819 13:24:31.004784 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.008770 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:24:31.008832 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:24:31.045412 2152365 cri.go:89] found id: "875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.045440 2152365 cri.go:89] found id: ""
	I0819 13:24:31.045451 2152365 logs.go:276] 1 containers: [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872]
	I0819 13:24:31.045508 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.049682 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:24:31.049759 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:24:31.090853 2152365 cri.go:89] found id: "0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.090946 2152365 cri.go:89] found id: ""
	I0819 13:24:31.090958 2152365 logs.go:276] 1 containers: [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226]
	I0819 13:24:31.091037 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.095245 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:24:31.095311 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:24:31.138661 2152365 cri.go:89] found id: "90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.138693 2152365 cri.go:89] found id: ""
	I0819 13:24:31.138705 2152365 logs.go:276] 1 containers: [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27]
	I0819 13:24:31.138775 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.143216 2152365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:24:31.143283 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:24:31.186554 2152365 cri.go:89] found id: ""
	I0819 13:24:31.186580 2152365 logs.go:276] 0 containers: []
	W0819 13:24:31.186589 2152365 logs.go:278] No container was found matching "kindnet"
	I0819 13:24:31.186595 2152365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:24:31.186645 2152365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:24:31.226218 2152365 cri.go:89] found id: "f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.226257 2152365 cri.go:89] found id: "8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:31.226261 2152365 cri.go:89] found id: ""
	I0819 13:24:31.226270 2152365 logs.go:276] 2 containers: [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778]
	I0819 13:24:31.226333 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.230602 2152365 ssh_runner.go:195] Run: which crictl
	I0819 13:24:31.234269 2152365 logs.go:123] Gathering logs for kubelet ...
	I0819 13:24:31.234290 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:24:31.299071 2152365 logs.go:123] Gathering logs for kube-scheduler [875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872] ...
	I0819 13:24:31.299107 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 875409a40965325eb1eabe05f15b07b2154560d072394c8af5b95a66d8c2b872"
	I0819 13:24:31.336594 2152365 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:24:31.336633 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:24:31.745390 2152365 logs.go:123] Gathering logs for etcd [9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1] ...
	I0819 13:24:31.745441 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2caf34bd6146426a4933fe826a2c6708ad6fd69259ad6d592e883679cd4cf1"
	I0819 13:24:31.797446 2152365 logs.go:123] Gathering logs for coredns [0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6] ...
	I0819 13:24:31.797479 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e12220c2cd139bbf19334440b646bfb80995d21e6f8d72b1150f048972b13c6"
	I0819 13:24:31.838882 2152365 logs.go:123] Gathering logs for kube-proxy [0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226] ...
	I0819 13:24:31.838916 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0eea9deb6fa46bc5e327ce412442ab5bc8729faf1ca15bcf61d41b9223fb3226"
	I0819 13:24:31.876830 2152365 logs.go:123] Gathering logs for kube-controller-manager [90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27] ...
	I0819 13:24:31.876871 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90eea23cc577eb8d64fa5dde3f4e180af97075ba8e5459a6bfa16af032886e27"
	I0819 13:24:31.935426 2152365 logs.go:123] Gathering logs for storage-provisioner [f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f] ...
	I0819 13:24:31.935461 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f547b541f49f126bb3b60cebd1d59fb4f0f4c78e95e06730551166994978647f"
	I0819 13:24:31.975182 2152365 logs.go:123] Gathering logs for dmesg ...
	I0819 13:24:31.975212 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:24:31.989631 2152365 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:24:31.989663 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:24:32.093154 2152365 logs.go:123] Gathering logs for kube-apiserver [c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b] ...
	I0819 13:24:32.093190 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c977986999bce2ef3361c889967423232b4d62167034a0f797494264227f6a1b"
	I0819 13:24:32.142869 2152365 logs.go:123] Gathering logs for storage-provisioner [8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778] ...
	I0819 13:24:32.142906 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8613bb1986481d7f064809fa3d59eb37faed06e7b6ea894e177bf997d4492778"
	I0819 13:24:32.179008 2152365 logs.go:123] Gathering logs for container status ...
	I0819 13:24:32.179043 2152365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:24:34.726569 2152365 system_pods.go:59] 8 kube-system pods found
	I0819 13:24:34.726604 2152365 system_pods.go:61] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.726609 2152365 system_pods.go:61] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.726613 2152365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.726619 2152365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.726622 2152365 system_pods.go:61] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.726625 2152365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.726633 2152365 system_pods.go:61] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.726636 2152365 system_pods.go:61] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.726644 2152365 system_pods.go:74] duration metric: took 3.84278789s to wait for pod list to return data ...
	I0819 13:24:34.726654 2152365 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:24:34.729710 2152365 default_sa.go:45] found service account: "default"
	I0819 13:24:34.729745 2152365 default_sa.go:55] duration metric: took 3.082291ms for default service account to be created ...
	I0819 13:24:34.729758 2152365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:24:34.734698 2152365 system_pods.go:86] 8 kube-system pods found
	I0819 13:24:34.734734 2152365 system_pods.go:89] "coredns-6f6b679f8f-8tl4c" [8eec97b9-9cf7-48a1-9025-744336d756f8] Running
	I0819 13:24:34.734744 2152365 system_pods.go:89] "etcd-default-k8s-diff-port-038572" [539d43de-8359-4b68-abc5-3c5623aaa72a] Running
	I0819 13:24:34.734752 2152365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038572" [cad403c2-e80e-4b34-a29a-3e6655859dba] Running
	I0819 13:24:34.734758 2152365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038572" [9d1e8c27-ac16-4aa1-a563-be07036c86a4] Running
	I0819 13:24:34.734765 2152365 system_pods.go:89] "kube-proxy-wnkk2" [84c07972-b838-4523-ac86-3cd5dd597c04] Running
	I0819 13:24:34.734768 2152365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038572" [c49352bc-c011-4213-a4ac-7df414350ded] Running
	I0819 13:24:34.734775 2152365 system_pods.go:89] "metrics-server-6867b74b74-hmghm" [73577eaa-4632-432b-8c5f-7421646873a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:24:34.734784 2152365 system_pods.go:89] "storage-provisioner" [949a9312-bef2-4c56-af13-a895bec28e1c] Running
	I0819 13:24:34.734792 2152365 system_pods.go:126] duration metric: took 5.027633ms to wait for k8s-apps to be running ...
	I0819 13:24:34.734802 2152365 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:24:34.734863 2152365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:24:34.756925 2152365 system_svc.go:56] duration metric: took 22.110614ms WaitForService to wait for kubelet
	I0819 13:24:34.756964 2152365 kubeadm.go:582] duration metric: took 4m23.367451416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:24:34.757007 2152365 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:24:34.760465 2152365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:24:34.760497 2152365 node_conditions.go:123] node cpu capacity is 2
	I0819 13:24:34.760510 2152365 node_conditions.go:105] duration metric: took 3.498566ms to run NodePressure ...
	I0819 13:24:34.760524 2152365 start.go:241] waiting for startup goroutines ...
	I0819 13:24:34.760531 2152365 start.go:246] waiting for cluster config update ...
	I0819 13:24:34.760552 2152365 start.go:255] writing updated cluster config ...
	I0819 13:24:34.760888 2152365 ssh_runner.go:195] Run: rm -f paused
	I0819 13:24:34.814139 2152365 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:24:34.815906 2152365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-038572" cluster and "default" namespace by default
	I0819 13:24:31.305059 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.804408 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.804797 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:31.462264 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:33.961404 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:35.961719 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.305696 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.804500 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:38.460369 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:40.462138 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.804768 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.804803 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:42.462412 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:44.961368 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:46.804973 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.304762 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:47.460607 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:49.461589 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.804815 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:54.304854 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:51.961120 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:53.962098 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:55.962219 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:56.804112 2153523 pod_ready.go:103] pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace has status "Ready":"False"
	I0819 13:24:57.298511 2153523 pod_ready.go:82] duration metric: took 4m0.000785617s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" ...
	E0819 13:24:57.298548 2153523 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8dwpf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:24:57.298569 2153523 pod_ready.go:39] duration metric: took 4m6.439346081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:24:57.298603 2153523 kubeadm.go:597] duration metric: took 4m16.103432347s to restartPrimaryControlPlane
	W0819 13:24:57.298676 2153523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:24:57.298707 2153523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:24:58.461354 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:00.961514 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:03.460885 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:05.461736 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:09.173035 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:25:09.173672 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:09.173913 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:07.962080 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:10.461273 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:14.174468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:14.174714 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:12.961725 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:15.461361 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:17.461731 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:19.961772 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:23.413286 2153523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.114546859s)
	I0819 13:25:23.413389 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:23.440190 2153523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:25:23.450707 2153523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:25:23.462984 2153523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:25:23.463003 2153523 kubeadm.go:157] found existing configuration files:
	
	I0819 13:25:23.463046 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:25:23.472381 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:25:23.472437 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:25:23.482090 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:25:23.491789 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:25:23.491845 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:25:23.501722 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.510879 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:25:23.510936 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:25:23.520464 2153523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:25:23.529582 2153523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:25:23.529642 2153523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
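	For orientation, the stale-config cleanup traced above boils down to: keep each kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, otherwise delete it so the following kubeadm init regenerates it. A minimal hand-run sketch of the same steps (grep/rm commands as in the log; the loop itself is only illustrative):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # file missing or pointing elsewhere -> remove it before kubeadm init
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done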
	I0819 13:25:23.539089 2153523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:25:23.585240 2153523 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:25:23.585319 2153523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:25:23.712867 2153523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:25:23.712994 2153523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:25:23.713118 2153523 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:25:23.722519 2153523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:25:23.724354 2153523 out.go:235]   - Generating certificates and keys ...
	I0819 13:25:23.724436 2153523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:25:23.724517 2153523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:25:23.724598 2153523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:25:23.724648 2153523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:25:23.724702 2153523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:25:23.724745 2153523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:25:23.724795 2153523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:25:23.724847 2153523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:25:23.724904 2153523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:25:23.725051 2153523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:25:23.725125 2153523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:25:23.725238 2153523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:25:23.779977 2153523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:25:23.930648 2153523 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:25:24.124057 2153523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:25:24.314884 2153523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:25:24.494351 2153523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:25:24.495024 2153523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:25:24.497778 2153523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:25:24.175179 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:24.175432 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:25:24.499543 2153523 out.go:235]   - Booting up control plane ...
	I0819 13:25:24.499671 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:25:24.499762 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:25:24.499864 2153523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:25:24.520379 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:25:24.532777 2153523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:25:24.532853 2153523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:25:24.666433 2153523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:25:24.666580 2153523 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:25:25.168265 2153523 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.973317ms
	I0819 13:25:25.168373 2153523 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:25:21.961823 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:24.462014 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:30.169290 2153523 kubeadm.go:310] [api-check] The API server is healthy after 5.001301539s
	I0819 13:25:30.182039 2153523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:25:30.198499 2153523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:25:30.224273 2153523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:25:30.224513 2153523 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-823470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:25:30.236397 2153523 kubeadm.go:310] [bootstrap-token] Using token: uoaia1.lo1g4dbd1uwo2t3t
	I0819 13:25:30.237712 2153523 out.go:235]   - Configuring RBAC rules ...
	I0819 13:25:30.237846 2153523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:25:30.245121 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:25:30.254416 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:25:30.257701 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:25:30.262104 2153523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:25:30.266592 2153523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:25:30.577306 2153523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:25:26.961302 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:28.962130 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:31.051204 2153523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:25:31.576457 2153523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:25:31.577557 2153523 kubeadm.go:310] 
	I0819 13:25:31.577644 2153523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:25:31.577653 2153523 kubeadm.go:310] 
	I0819 13:25:31.577728 2153523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:25:31.577738 2153523 kubeadm.go:310] 
	I0819 13:25:31.577786 2153523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:25:31.577872 2153523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:25:31.577949 2153523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:25:31.577959 2153523 kubeadm.go:310] 
	I0819 13:25:31.578041 2153523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:25:31.578051 2153523 kubeadm.go:310] 
	I0819 13:25:31.578111 2153523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:25:31.578120 2153523 kubeadm.go:310] 
	I0819 13:25:31.578174 2153523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:25:31.578266 2153523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:25:31.578354 2153523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:25:31.578383 2153523 kubeadm.go:310] 
	I0819 13:25:31.578524 2153523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:25:31.578624 2153523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:25:31.578641 2153523 kubeadm.go:310] 
	I0819 13:25:31.578765 2153523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.578886 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:25:31.578922 2153523 kubeadm.go:310] 	--control-plane 
	I0819 13:25:31.578931 2153523 kubeadm.go:310] 
	I0819 13:25:31.579045 2153523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:25:31.579055 2153523 kubeadm.go:310] 
	I0819 13:25:31.579151 2153523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uoaia1.lo1g4dbd1uwo2t3t \
	I0819 13:25:31.579277 2153523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:25:31.580242 2153523 kubeadm.go:310] W0819 13:25:23.534803    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580611 2153523 kubeadm.go:310] W0819 13:25:23.535422    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:25:31.580758 2153523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:25:31.580793 2153523 cni.go:84] Creating CNI manager for ""
	I0819 13:25:31.580803 2153523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:25:31.582322 2153523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:25:31.583520 2153523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:25:31.594967 2153523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:25:31.613491 2153523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:25:31.613590 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-823470 minikube.k8s.io/updated_at=2024_08_19T13_25_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=embed-certs-823470 minikube.k8s.io/primary=true
	I0819 13:25:31.613591 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.644190 2153523 ops.go:34] apiserver oom_adj: -16
	I0819 13:25:31.824270 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.324994 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:32.824955 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.325186 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:33.824797 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.325222 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:34.824306 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.325248 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:35.825154 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:31.461327 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:33.461401 2151869 pod_ready.go:103] pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:35.455588 2151869 pod_ready.go:82] duration metric: took 4m0.000741427s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" ...
	E0819 13:25:35.455630 2151869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-hd777" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 13:25:35.455654 2151869 pod_ready.go:39] duration metric: took 4m14.499879559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:35.455689 2151869 kubeadm.go:597] duration metric: took 4m22.29129413s to restartPrimaryControlPlane
	W0819 13:25:35.455763 2151869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 13:25:35.455798 2151869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:25:36.324463 2153523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:25:36.464343 2153523 kubeadm.go:1113] duration metric: took 4.850819101s to wait for elevateKubeSystemPrivileges
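	The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the cluster-admin binding is created for kube-system:default and then the default ServiceAccount is polled until it exists. A rough standalone equivalent (binary and kubeconfig paths taken from the log) would be:

	    K=/var/lib/minikube/binaries/v1.31.0/kubectl
	    KC=/var/lib/minikube/kubeconfig
	    sudo "$K" --kubeconfig="$KC" create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default
	    # poll until the default ServiceAccount shows up
	    until sudo "$K" --kubeconfig="$KC" get sa default >/dev/null 2>&1; do sleep 1; done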
	I0819 13:25:36.464399 2153523 kubeadm.go:394] duration metric: took 4m55.327792066s to StartCluster
	I0819 13:25:36.464441 2153523 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.464553 2153523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:25:36.466407 2153523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:25:36.466694 2153523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:25:36.466786 2153523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:25:36.466883 2153523 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-823470"
	I0819 13:25:36.466903 2153523 addons.go:69] Setting default-storageclass=true in profile "embed-certs-823470"
	I0819 13:25:36.466908 2153523 addons.go:69] Setting metrics-server=true in profile "embed-certs-823470"
	I0819 13:25:36.466924 2153523 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-823470"
	W0819 13:25:36.466935 2153523 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:25:36.466943 2153523 addons.go:234] Setting addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:36.466949 2153523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-823470"
	W0819 13:25:36.466961 2153523 addons.go:243] addon metrics-server should already be in state true
	I0819 13:25:36.466968 2153523 config.go:182] Loaded profile config "embed-certs-823470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:25:36.466987 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.466998 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.467388 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467397 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.467421 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467422 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.467424 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.468449 2153523 out.go:177] * Verifying Kubernetes components...
	I0819 13:25:36.469946 2153523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:25:36.484726 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0819 13:25:36.484740 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0819 13:25:36.484824 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0819 13:25:36.485224 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485280 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485303 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.485822 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485841 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485824 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485881 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.485956 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.485982 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.486206 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486260 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486267 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.486382 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.486787 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486803 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.486819 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.486836 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.490147 2153523 addons.go:234] Setting addon default-storageclass=true in "embed-certs-823470"
	W0819 13:25:36.490169 2153523 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:25:36.490198 2153523 host.go:66] Checking if "embed-certs-823470" exists ...
	I0819 13:25:36.490599 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.490628 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.506245 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0819 13:25:36.506573 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0819 13:25:36.506955 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507024 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.507467 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507485 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507599 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.507619 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.507896 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.507899 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.508055 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.508657 2153523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:25:36.508682 2153523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:25:36.509546 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.511452 2153523 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:25:36.512660 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:25:36.512680 2153523 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:25:36.512702 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.515934 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516401 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.516422 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.516608 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.516810 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.517007 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.517160 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.517459 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0819 13:25:36.517826 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.518316 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.518339 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.518689 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.518859 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.520336 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.521729 2153523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:25:36.522775 2153523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.522793 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:25:36.522813 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.525839 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526467 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.526505 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.526736 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.527191 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.527384 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.527540 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.527918 2153523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0819 13:25:36.528354 2153523 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:25:36.528868 2153523 main.go:141] libmachine: Using API Version  1
	I0819 13:25:36.528890 2153523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:25:36.529243 2153523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:25:36.529418 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetState
	I0819 13:25:36.530881 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .DriverName
	I0819 13:25:36.531125 2153523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.531138 2153523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:25:36.531151 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHHostname
	I0819 13:25:36.534168 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534579 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:ac:f9", ip: ""} in network mk-embed-certs-823470: {Iface:virbr1 ExpiryTime:2024-08-19 14:20:27 +0000 UTC Type:0 Mac:52:54:00:1d:ac:f9 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:embed-certs-823470 Clientid:01:52:54:00:1d:ac:f9}
	I0819 13:25:36.534603 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | domain embed-certs-823470 has defined IP address 192.168.61.50 and MAC address 52:54:00:1d:ac:f9 in network mk-embed-certs-823470
	I0819 13:25:36.534703 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHPort
	I0819 13:25:36.534855 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHKeyPath
	I0819 13:25:36.534995 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .GetSSHUsername
	I0819 13:25:36.535105 2153523 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/embed-certs-823470/id_rsa Username:docker}
	I0819 13:25:36.723562 2153523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:25:36.740697 2153523 node_ready.go:35] waiting up to 6m0s for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749268 2153523 node_ready.go:49] node "embed-certs-823470" has status "Ready":"True"
	I0819 13:25:36.749294 2153523 node_ready.go:38] duration metric: took 8.560448ms for node "embed-certs-823470" to be "Ready" ...
	I0819 13:25:36.749306 2153523 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:36.756208 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:36.829190 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:25:36.845456 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:25:36.845487 2153523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:25:36.857828 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:25:36.928557 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:25:36.928590 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:25:36.979832 2153523 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:36.979864 2153523 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:25:37.086105 2153523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:25:37.498852 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498884 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.498909 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.498924 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499189 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499210 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499218 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499225 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499393 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499392 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499405 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499419 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.499426 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.499762 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499774 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.499771 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499761 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.499815 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.499835 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510580 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.510601 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.510894 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.510922 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.510924 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.721765 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.721791 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722119 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722143 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722160 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722171 2153523 main.go:141] libmachine: Making call to close driver server
	I0819 13:25:37.722180 2153523 main.go:141] libmachine: (embed-certs-823470) Calling .Close
	I0819 13:25:37.722502 2153523 main.go:141] libmachine: (embed-certs-823470) DBG | Closing plugin on server side
	I0819 13:25:37.722513 2153523 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:25:37.722525 2153523 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:25:37.722538 2153523 addons.go:475] Verifying addon metrics-server=true in "embed-certs-823470"
	I0819 13:25:37.724029 2153523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 13:25:37.725177 2153523 addons.go:510] duration metric: took 1.258399079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
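	The metrics-server addon enabled above is just the bundled manifests applied with the cluster's own kubeconfig; the apply command from the log, reformatted onto multiple lines for readability:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.0/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml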
	I0819 13:25:38.763132 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:40.765558 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:44.175962 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:25:44.176268 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
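	The kubelet-check lines interleaved above (process 2152780) are a plain HTTP probe of the kubelet's local healthz endpoint that keeps getting connection refused, i.e. nothing is listening on port 10248 at all. It can be re-run by hand from inside the affected VM; that process's profile is not named in this excerpt, so the name below is a placeholder:

	    minikube ssh -p <profile> -- curl -sSL http://localhost:10248/healthz
	    # "connection refused" = kubelet not running; an unhealthy-but-running
	    # kubelet would instead answer the request with a non-ok response.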
	I0819 13:25:43.263762 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:45.765551 2153523 pod_ready.go:103] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"False"
	I0819 13:25:46.265160 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.265189 2153523 pod_ready.go:82] duration metric: took 9.50894907s for pod "coredns-6f6b679f8f-95hnp" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.265202 2153523 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270706 2153523 pod_ready.go:93] pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.270733 2153523 pod_ready.go:82] duration metric: took 5.521974ms for pod "coredns-6f6b679f8f-nwzh6" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.270745 2153523 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275061 2153523 pod_ready.go:93] pod "etcd-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.275095 2153523 pod_ready.go:82] duration metric: took 4.331701ms for pod "etcd-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.275106 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280251 2153523 pod_ready.go:93] pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.280269 2153523 pod_ready.go:82] duration metric: took 5.157209ms for pod "kube-apiserver-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.280278 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284437 2153523 pod_ready.go:93] pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.284456 2153523 pod_ready.go:82] duration metric: took 4.173028ms for pod "kube-controller-manager-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.284464 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660549 2153523 pod_ready.go:93] pod "kube-proxy-cn9sl" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:46.660577 2153523 pod_ready.go:82] duration metric: took 376.106459ms for pod "kube-proxy-cn9sl" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:46.660588 2153523 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060405 2153523 pod_ready.go:93] pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace has status "Ready":"True"
	I0819 13:25:47.060432 2153523 pod_ready.go:82] duration metric: took 399.837833ms for pod "kube-scheduler-embed-certs-823470" in "kube-system" namespace to be "Ready" ...
	I0819 13:25:47.060440 2153523 pod_ready.go:39] duration metric: took 10.311121311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:25:47.060458 2153523 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:25:47.060513 2153523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:25:47.076832 2153523 api_server.go:72] duration metric: took 10.610087948s to wait for apiserver process to appear ...
	I0819 13:25:47.076869 2153523 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:25:47.076894 2153523 api_server.go:253] Checking apiserver healthz at https://192.168.61.50:8443/healthz ...
	I0819 13:25:47.081037 2153523 api_server.go:279] https://192.168.61.50:8443/healthz returned 200:
	ok
	I0819 13:25:47.081960 2153523 api_server.go:141] control plane version: v1.31.0
	I0819 13:25:47.081984 2153523 api_server.go:131] duration metric: took 5.106291ms to wait for apiserver health ...
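	The apiserver health check above can be repeated by hand against the same endpoint; /healthz is normally readable without credentials, so a plain curl (TLS verification skipped here for brevity) should print "ok" once the control plane is up:

	    curl -k https://192.168.61.50:8443/healthz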
	I0819 13:25:47.081993 2153523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:25:47.264787 2153523 system_pods.go:59] 9 kube-system pods found
	I0819 13:25:47.264817 2153523 system_pods.go:61] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.264822 2153523 system_pods.go:61] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.264826 2153523 system_pods.go:61] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.264830 2153523 system_pods.go:61] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.264835 2153523 system_pods.go:61] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.264839 2153523 system_pods.go:61] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.264842 2153523 system_pods.go:61] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.264849 2153523 system_pods.go:61] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.264853 2153523 system_pods.go:61] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.264863 2153523 system_pods.go:74] duration metric: took 182.863608ms to wait for pod list to return data ...
	I0819 13:25:47.264870 2153523 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:25:47.461938 2153523 default_sa.go:45] found service account: "default"
	I0819 13:25:47.461968 2153523 default_sa.go:55] duration metric: took 197.091755ms for default service account to be created ...
	I0819 13:25:47.461978 2153523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:25:47.664070 2153523 system_pods.go:86] 9 kube-system pods found
	I0819 13:25:47.664101 2153523 system_pods.go:89] "coredns-6f6b679f8f-95hnp" [3600e8be-6152-427e-b049-52ff758d6b44] Running
	I0819 13:25:47.664106 2153523 system_pods.go:89] "coredns-6f6b679f8f-nwzh6" [514f2cfd-88a1-4a1f-be53-4c21a33e9f3b] Running
	I0819 13:25:47.664110 2153523 system_pods.go:89] "etcd-embed-certs-823470" [b34667b5-9a7a-446a-a1a1-fa5258535f3c] Running
	I0819 13:25:47.664115 2153523 system_pods.go:89] "kube-apiserver-embed-certs-823470" [d2023ae7-ca38-4f9b-a974-21eb8fe97278] Running
	I0819 13:25:47.664119 2153523 system_pods.go:89] "kube-controller-manager-embed-certs-823470" [98ebc863-cd4a-40a0-b87c-99d75d402992] Running
	I0819 13:25:47.664122 2153523 system_pods.go:89] "kube-proxy-cn9sl" [d1d8735d-9726-4b29-9e25-ff427831e523] Running
	I0819 13:25:47.664125 2153523 system_pods.go:89] "kube-scheduler-embed-certs-823470" [88adf7e9-b80d-472a-9b0a-f590786c7ac0] Running
	I0819 13:25:47.664131 2153523 system_pods.go:89] "metrics-server-6867b74b74-hkt29" [8280db9d-3e31-4689-a2bc-fdcc8ca7f129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:25:47.664135 2153523 system_pods.go:89] "storage-provisioner" [0ed9dc08-c685-4e1a-9a78-c7c90cf2cf31] Running
	I0819 13:25:47.664146 2153523 system_pods.go:126] duration metric: took 202.162032ms to wait for k8s-apps to be running ...
	I0819 13:25:47.664157 2153523 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:25:47.664259 2153523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:25:47.680000 2153523 system_svc.go:56] duration metric: took 15.832426ms WaitForService to wait for kubelet
	I0819 13:25:47.680036 2153523 kubeadm.go:582] duration metric: took 11.213303025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:25:47.680063 2153523 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:25:47.861730 2153523 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:25:47.861762 2153523 node_conditions.go:123] node cpu capacity is 2
	I0819 13:25:47.861774 2153523 node_conditions.go:105] duration metric: took 181.706418ms to run NodePressure ...
	I0819 13:25:47.861786 2153523 start.go:241] waiting for startup goroutines ...
	I0819 13:25:47.861793 2153523 start.go:246] waiting for cluster config update ...
	I0819 13:25:47.861804 2153523 start.go:255] writing updated cluster config ...
	I0819 13:25:47.862066 2153523 ssh_runner.go:195] Run: rm -f paused
	I0819 13:25:47.913161 2153523 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:25:47.915029 2153523 out.go:177] * Done! kubectl is now configured to use "embed-certs-823470" cluster and "default" namespace by default
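	With the embed-certs-823470 start complete, the pod inventory reported above (everything Running except metrics-server-6867b74b74-hkt29, still Pending) can be re-checked at any time from the host, for example:

	    kubectl --context embed-certs-823470 -n kube-system get pods -o wide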
	I0819 13:26:01.591066 2151869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.135241188s)
	I0819 13:26:01.591140 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:01.620641 2151869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:26:01.633612 2151869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:01.647057 2151869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:01.647085 2151869 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:01.647144 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:01.663839 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:01.663903 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:01.681570 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:01.692967 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:01.693038 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:01.714743 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.723974 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:01.724044 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:01.734044 2151869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:01.743276 2151869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:01.743326 2151869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:01.752586 2151869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:01.798481 2151869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:26:01.798564 2151869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:01.909646 2151869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:01.909801 2151869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:01.909923 2151869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:26:01.923525 2151869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:01.925556 2151869 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:01.925670 2151869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:01.925760 2151869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:01.925900 2151869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:01.925991 2151869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:01.926102 2151869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:01.926162 2151869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:01.926248 2151869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:01.926334 2151869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:01.926449 2151869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:01.926567 2151869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:01.926632 2151869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:01.926707 2151869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:02.049009 2151869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:02.310249 2151869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:26:02.425638 2151869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:02.664112 2151869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:03.165747 2151869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:03.166413 2151869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:03.169075 2151869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:03.170768 2151869 out.go:235]   - Booting up control plane ...
	I0819 13:26:03.170893 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:03.171003 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:03.171184 2151869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:03.190081 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:03.197988 2151869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:03.198058 2151869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:03.327052 2151869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:26:03.327180 2151869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:26:04.332623 2151869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002268296s
	I0819 13:26:04.332744 2151869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:26:09.332759 2151869 kubeadm.go:310] [api-check] The API server is healthy after 5.001673024s
	I0819 13:26:09.344645 2151869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:26:09.359806 2151869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:26:09.393264 2151869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:26:09.393499 2151869 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-441627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:26:09.413036 2151869 kubeadm.go:310] [bootstrap-token] Using token: cva6zd.b3089a5fajgqfd6r
	I0819 13:26:09.414921 2151869 out.go:235]   - Configuring RBAC rules ...
	I0819 13:26:09.415128 2151869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:26:09.420470 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:26:09.431168 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:26:09.438188 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:26:09.443406 2151869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:26:09.450497 2151869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:26:09.737815 2151869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:26:10.165062 2151869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:26:10.738017 2151869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:26:10.739202 2151869 kubeadm.go:310] 
	I0819 13:26:10.739309 2151869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:26:10.739321 2151869 kubeadm.go:310] 
	I0819 13:26:10.739415 2151869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:26:10.739426 2151869 kubeadm.go:310] 
	I0819 13:26:10.739469 2151869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:26:10.739558 2151869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:26:10.739645 2151869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:26:10.739662 2151869 kubeadm.go:310] 
	I0819 13:26:10.739746 2151869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:26:10.739758 2151869 kubeadm.go:310] 
	I0819 13:26:10.739833 2151869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:26:10.739842 2151869 kubeadm.go:310] 
	I0819 13:26:10.739926 2151869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:26:10.740039 2151869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:26:10.740116 2151869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:26:10.740134 2151869 kubeadm.go:310] 
	I0819 13:26:10.740272 2151869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:26:10.740376 2151869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:26:10.740387 2151869 kubeadm.go:310] 
	I0819 13:26:10.740487 2151869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.740651 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 \
	I0819 13:26:10.740689 2151869 kubeadm.go:310] 	--control-plane 
	I0819 13:26:10.740699 2151869 kubeadm.go:310] 
	I0819 13:26:10.740808 2151869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:26:10.740822 2151869 kubeadm.go:310] 
	I0819 13:26:10.740931 2151869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cva6zd.b3089a5fajgqfd6r \
	I0819 13:26:10.741086 2151869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5872e2a321f4ad01c8829daa4e12e6f2203b521ab405522c1fecb874e15a4365 
	I0819 13:26:10.743032 2151869 kubeadm.go:310] W0819 13:26:01.774566    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743353 2151869 kubeadm.go:310] W0819 13:26:01.775291    3009 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:26:10.743487 2151869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:10.743516 2151869 cni.go:84] Creating CNI manager for ""
	I0819 13:26:10.743530 2151869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 13:26:10.745175 2151869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 13:26:10.746234 2151869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 13:26:10.756423 2151869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 13:26:10.774266 2151869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:26:10.774332 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:10.774349 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-441627 minikube.k8s.io/updated_at=2024_08_19T13_26_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=no-preload-441627 minikube.k8s.io/primary=true
	I0819 13:26:10.800516 2151869 ops.go:34] apiserver oom_adj: -16
	I0819 13:26:10.991369 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.492114 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:11.992301 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.491433 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:12.991812 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.492015 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:13.991704 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.492129 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:14.992304 2151869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:26:15.079535 2151869 kubeadm.go:1113] duration metric: took 4.305270284s to wait for elevateKubeSystemPrivileges
	I0819 13:26:15.079607 2151869 kubeadm.go:394] duration metric: took 5m1.9669805s to StartCluster
	I0819 13:26:15.079635 2151869 settings.go:142] acquiring lock: {Name:mk8440fecb791997cce7218b8bd006e220238f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.079759 2151869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:26:15.081957 2151869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-2086935/kubeconfig: {Name:mk84f32343446678800096f8797ab3ea35a6aa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:26:15.082240 2151869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 13:26:15.082282 2151869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:26:15.082374 2151869 addons.go:69] Setting storage-provisioner=true in profile "no-preload-441627"
	I0819 13:26:15.082413 2151869 addons.go:234] Setting addon storage-provisioner=true in "no-preload-441627"
	I0819 13:26:15.082409 2151869 addons.go:69] Setting metrics-server=true in profile "no-preload-441627"
	W0819 13:26:15.082425 2151869 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:26:15.082389 2151869 addons.go:69] Setting default-storageclass=true in profile "no-preload-441627"
	I0819 13:26:15.082449 2151869 addons.go:234] Setting addon metrics-server=true in "no-preload-441627"
	I0819 13:26:15.082461 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	W0819 13:26:15.082463 2151869 addons.go:243] addon metrics-server should already be in state true
	I0819 13:26:15.082479 2151869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-441627"
	I0819 13:26:15.082503 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.082514 2151869 config.go:182] Loaded profile config "no-preload-441627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082869 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082830 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082920 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.082925 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.082947 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.083691 2151869 out.go:177] * Verifying Kubernetes components...
	I0819 13:26:15.085493 2151869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:26:15.101376 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0819 13:26:15.102091 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.102685 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.102703 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.103169 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.103181 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0819 13:26:15.103193 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0819 13:26:15.103361 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.103663 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.103732 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.104300 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104322 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.104710 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.104730 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.105081 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105409 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.105905 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.105938 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.106084 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.106119 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.107248 2151869 addons.go:234] Setting addon default-storageclass=true in "no-preload-441627"
	W0819 13:26:15.107271 2151869 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:26:15.107302 2151869 host.go:66] Checking if "no-preload-441627" exists ...
	I0819 13:26:15.107654 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.107713 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.122709 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0819 13:26:15.123128 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.123869 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.123898 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.124416 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.124634 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.126217 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0819 13:26:15.126791 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.127400 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.127419 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.127698 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.128022 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.128336 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.129232 2151869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:26:15.130058 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.130226 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:26:15.130243 2151869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:26:15.130262 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.131335 2151869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:26:15.131652 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0819 13:26:15.132227 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.132570 2151869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.132586 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:26:15.132605 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.133517 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134218 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.134244 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.134600 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.134738 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.134755 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.134852 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.135030 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.135097 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.135284 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.135673 2151869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 13:26:15.135702 2151869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 13:26:15.136748 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137177 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.137197 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.137442 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.137613 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.137765 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.137924 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.154937 2151869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0819 13:26:15.155407 2151869 main.go:141] libmachine: () Calling .GetVersion
	I0819 13:26:15.155835 2151869 main.go:141] libmachine: Using API Version  1
	I0819 13:26:15.155847 2151869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 13:26:15.156238 2151869 main.go:141] libmachine: () Calling .GetMachineName
	I0819 13:26:15.156416 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetState
	I0819 13:26:15.158575 2151869 main.go:141] libmachine: (no-preload-441627) Calling .DriverName
	I0819 13:26:15.158795 2151869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.158805 2151869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:26:15.158818 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHHostname
	I0819 13:26:15.162078 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162519 2151869 main.go:141] libmachine: (no-preload-441627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:93:04", ip: ""} in network mk-no-preload-441627: {Iface:virbr3 ExpiryTime:2024-08-19 14:20:46 +0000 UTC Type:0 Mac:52:54:00:f8:93:04 Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-441627 Clientid:01:52:54:00:f8:93:04}
	I0819 13:26:15.162534 2151869 main.go:141] libmachine: (no-preload-441627) DBG | domain no-preload-441627 has defined IP address 192.168.50.161 and MAC address 52:54:00:f8:93:04 in network mk-no-preload-441627
	I0819 13:26:15.162825 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHPort
	I0819 13:26:15.162977 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHKeyPath
	I0819 13:26:15.163114 2151869 main.go:141] libmachine: (no-preload-441627) Calling .GetSSHUsername
	I0819 13:26:15.163198 2151869 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/no-preload-441627/id_rsa Username:docker}
	I0819 13:26:15.309062 2151869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:26:15.341525 2151869 node_ready.go:35] waiting up to 6m0s for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354270 2151869 node_ready.go:49] node "no-preload-441627" has status "Ready":"True"
	I0819 13:26:15.354307 2151869 node_ready.go:38] duration metric: took 12.742932ms for node "no-preload-441627" to be "Ready" ...
	I0819 13:26:15.354320 2151869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:15.363385 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:15.427138 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:26:15.429228 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:26:15.517567 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:26:15.517597 2151869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:26:15.626356 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:26:15.626393 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:26:15.677045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677072 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677414 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677471 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.677483 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.677490 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.677729 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.677749 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.683924 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:15.683943 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:15.684224 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:15.684249 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:15.684252 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:15.701949 2151869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:15.701980 2151869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:26:15.743920 2151869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:26:16.110007 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110035 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110393 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110415 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.110426 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.110439 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.110691 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.110712 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426045 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426076 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426617 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.426639 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.426650 2151869 main.go:141] libmachine: Making call to close driver server
	I0819 13:26:16.426660 2151869 main.go:141] libmachine: (no-preload-441627) Calling .Close
	I0819 13:26:16.426669 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426948 2151869 main.go:141] libmachine: (no-preload-441627) DBG | Closing plugin on server side
	I0819 13:26:16.426994 2151869 main.go:141] libmachine: Successfully made call to close driver server
	I0819 13:26:16.427011 2151869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 13:26:16.427024 2151869 addons.go:475] Verifying addon metrics-server=true in "no-preload-441627"
	I0819 13:26:16.428916 2151869 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 13:26:16.430020 2151869 addons.go:510] duration metric: took 1.347734507s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 13:26:17.369079 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:19.369662 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:21.369721 2151869 pod_ready.go:103] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"False"
	I0819 13:26:23.368863 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.368886 2151869 pod_ready.go:82] duration metric: took 8.005467546s for pod "coredns-6f6b679f8f-mvn98" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.368895 2151869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373851 2151869 pod_ready.go:93] pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.373873 2151869 pod_ready.go:82] duration metric: took 4.971429ms for pod "coredns-6f6b679f8f-p6zhv" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.373884 2151869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378381 2151869 pod_ready.go:93] pod "etcd-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.378405 2151869 pod_ready.go:82] duration metric: took 4.513601ms for pod "etcd-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.378417 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382700 2151869 pod_ready.go:93] pod "kube-apiserver-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.382719 2151869 pod_ready.go:82] duration metric: took 4.295583ms for pod "kube-apiserver-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.382728 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387145 2151869 pod_ready.go:93] pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.387168 2151869 pod_ready.go:82] duration metric: took 4.434097ms for pod "kube-controller-manager-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.387179 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768694 2151869 pod_ready.go:93] pod "kube-proxy-9jhss" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:23.768722 2151869 pod_ready.go:82] duration metric: took 381.534902ms for pod "kube-proxy-9jhss" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:23.768735 2151869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167861 2151869 pod_ready.go:93] pod "kube-scheduler-no-preload-441627" in "kube-system" namespace has status "Ready":"True"
	I0819 13:26:24.167888 2151869 pod_ready.go:82] duration metric: took 399.144976ms for pod "kube-scheduler-no-preload-441627" in "kube-system" namespace to be "Ready" ...
	I0819 13:26:24.167896 2151869 pod_ready.go:39] duration metric: took 8.81356327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:26:24.167913 2151869 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:26:24.167967 2151869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:26:24.187913 2151869 api_server.go:72] duration metric: took 9.105632249s to wait for apiserver process to appear ...
	I0819 13:26:24.187938 2151869 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:26:24.187960 2151869 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0819 13:26:24.192758 2151869 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0819 13:26:24.193797 2151869 api_server.go:141] control plane version: v1.31.0
	I0819 13:26:24.193822 2151869 api_server.go:131] duration metric: took 5.87693ms to wait for apiserver health ...
	I0819 13:26:24.193833 2151869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:26:24.371261 2151869 system_pods.go:59] 9 kube-system pods found
	I0819 13:26:24.371291 2151869 system_pods.go:61] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.371296 2151869 system_pods.go:61] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.371300 2151869 system_pods.go:61] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.371304 2151869 system_pods.go:61] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.371308 2151869 system_pods.go:61] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.371312 2151869 system_pods.go:61] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.371317 2151869 system_pods.go:61] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.371325 2151869 system_pods.go:61] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.371331 2151869 system_pods.go:61] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.371342 2151869 system_pods.go:74] duration metric: took 177.503046ms to wait for pod list to return data ...
	I0819 13:26:24.371355 2151869 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:26:24.568039 2151869 default_sa.go:45] found service account: "default"
	I0819 13:26:24.568068 2151869 default_sa.go:55] duration metric: took 196.702869ms for default service account to be created ...
	I0819 13:26:24.568078 2151869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:26:24.771705 2151869 system_pods.go:86] 9 kube-system pods found
	I0819 13:26:24.771734 2151869 system_pods.go:89] "coredns-6f6b679f8f-mvn98" [a34ae24b-14c2-4b5a-8936-fd1785e1e26f] Running
	I0819 13:26:24.771741 2151869 system_pods.go:89] "coredns-6f6b679f8f-p6zhv" [74e6b340-a400-4672-a8c2-3e3ffe39019f] Running
	I0819 13:26:24.771745 2151869 system_pods.go:89] "etcd-no-preload-441627" [e773cd57-1e6b-465b-a537-7811f4196083] Running
	I0819 13:26:24.771749 2151869 system_pods.go:89] "kube-apiserver-no-preload-441627" [7694aa7d-d88e-4e98-9a80-a79862b91e59] Running
	I0819 13:26:24.771753 2151869 system_pods.go:89] "kube-controller-manager-no-preload-441627" [9e9f8d32-5ea1-4523-91cb-f2b85bdaff54] Running
	I0819 13:26:24.771756 2151869 system_pods.go:89] "kube-proxy-9jhss" [de8a2831-619c-4651-bb20-b077271795e5] Running
	I0819 13:26:24.771759 2151869 system_pods.go:89] "kube-scheduler-no-preload-441627" [18576e68-b112-4dca-aa9b-7f0a173b19b2] Running
	I0819 13:26:24.771765 2151869 system_pods.go:89] "metrics-server-6867b74b74-rzp96" [de104a11-2674-4f73-875c-33c2a899a89d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:26:24.771770 2151869 system_pods.go:89] "storage-provisioner" [b27ff9d6-b00d-4e31-8f05-60454adbcf44] Running
	I0819 13:26:24.771781 2151869 system_pods.go:126] duration metric: took 203.697331ms to wait for k8s-apps to be running ...
	I0819 13:26:24.771794 2151869 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:26:24.771837 2151869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.788088 2151869 system_svc.go:56] duration metric: took 16.285619ms WaitForService to wait for kubelet
	I0819 13:26:24.788122 2151869 kubeadm.go:582] duration metric: took 9.705841529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:26:24.788157 2151869 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:26:24.968341 2151869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 13:26:24.968377 2151869 node_conditions.go:123] node cpu capacity is 2
	I0819 13:26:24.968392 2151869 node_conditions.go:105] duration metric: took 180.211418ms to run NodePressure ...
	I0819 13:26:24.968409 2151869 start.go:241] waiting for startup goroutines ...
	I0819 13:26:24.968418 2151869 start.go:246] waiting for cluster config update ...
	I0819 13:26:24.968429 2151869 start.go:255] writing updated cluster config ...
	I0819 13:26:24.968747 2151869 ssh_runner.go:195] Run: rm -f paused
	I0819 13:26:25.025595 2151869 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:26:25.027536 2151869 out.go:177] * Done! kubectl is now configured to use "no-preload-441627" cluster and "default" namespace by default
	I0819 13:26:24.177907 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:26:24.178201 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:26:24.178214 2152780 kubeadm.go:310] 
	I0819 13:26:24.178291 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:26:24.178372 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:26:24.178382 2152780 kubeadm.go:310] 
	I0819 13:26:24.178424 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:26:24.178476 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:26:24.178656 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:26:24.178674 2152780 kubeadm.go:310] 
	I0819 13:26:24.178792 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:26:24.178834 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:26:24.178879 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:26:24.178885 2152780 kubeadm.go:310] 
	I0819 13:26:24.179039 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:26:24.179150 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:26:24.179160 2152780 kubeadm.go:310] 
	I0819 13:26:24.179343 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:26:24.179495 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:26:24.179612 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:26:24.179737 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:26:24.179753 2152780 kubeadm.go:310] 
	I0819 13:26:24.180394 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:26:24.180518 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:26:24.180601 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 13:26:24.180756 2152780 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 13:26:24.180807 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 13:26:24.684112 2152780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:26:24.697957 2152780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:26:24.707579 2152780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:26:24.707600 2152780 kubeadm.go:157] found existing configuration files:
	
	I0819 13:26:24.707649 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:26:24.717570 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:26:24.717642 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:26:24.727069 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:26:24.735692 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:26:24.735757 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:26:24.744522 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.753036 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:26:24.753090 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:26:24.763455 2152780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:26:24.772780 2152780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:26:24.772839 2152780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:26:24.782898 2152780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 13:26:24.856625 2152780 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 13:26:24.856732 2152780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:26:25.003965 2152780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:26:25.004118 2152780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:26:25.004294 2152780 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 13:26:25.205556 2152780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:26:25.207347 2152780 out.go:235]   - Generating certificates and keys ...
	I0819 13:26:25.207462 2152780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:26:25.207553 2152780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:26:25.207671 2152780 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 13:26:25.207759 2152780 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 13:26:25.207840 2152780 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 13:26:25.207910 2152780 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 13:26:25.207978 2152780 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 13:26:25.208063 2152780 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 13:26:25.208526 2152780 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 13:26:25.209357 2152780 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 13:26:25.209668 2152780 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 13:26:25.209816 2152780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:26:25.424000 2152780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:26:25.667776 2152780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:26:25.723895 2152780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:26:25.817349 2152780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:26:25.834327 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:26:25.836860 2152780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:26:25.837016 2152780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:26:25.992443 2152780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:26:25.994085 2152780 out.go:235]   - Booting up control plane ...
	I0819 13:26:25.994223 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:26:26.007880 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:26:26.009553 2152780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:26:26.013310 2152780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:26:26.014264 2152780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 13:27:06.016907 2152780 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 13:27:06.017042 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:06.017274 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:11.017468 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:11.017758 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:21.018328 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:21.018585 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:27:41.019394 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:27:41.019632 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018489 2152780 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 13:28:21.018784 2152780 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 13:28:21.018807 2152780 kubeadm.go:310] 
	I0819 13:28:21.018879 2152780 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 13:28:21.018963 2152780 kubeadm.go:310] 		timed out waiting for the condition
	I0819 13:28:21.018971 2152780 kubeadm.go:310] 
	I0819 13:28:21.019000 2152780 kubeadm.go:310] 	This error is likely caused by:
	I0819 13:28:21.019030 2152780 kubeadm.go:310] 		- The kubelet is not running
	I0819 13:28:21.019122 2152780 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 13:28:21.019130 2152780 kubeadm.go:310] 
	I0819 13:28:21.019233 2152780 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 13:28:21.019276 2152780 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 13:28:21.019329 2152780 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 13:28:21.019337 2152780 kubeadm.go:310] 
	I0819 13:28:21.019471 2152780 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 13:28:21.019607 2152780 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 13:28:21.019620 2152780 kubeadm.go:310] 
	I0819 13:28:21.019776 2152780 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 13:28:21.019895 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 13:28:21.020016 2152780 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 13:28:21.020125 2152780 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 13:28:21.020139 2152780 kubeadm.go:310] 
	I0819 13:28:21.020943 2152780 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:28:21.021058 2152780 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 13:28:21.021158 2152780 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 13:28:21.021292 2152780 kubeadm.go:394] duration metric: took 7m56.942541209s to StartCluster
	I0819 13:28:21.021375 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:28:21.021456 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:28:21.068572 2152780 cri.go:89] found id: ""
	I0819 13:28:21.068613 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.068624 2152780 logs.go:278] No container was found matching "kube-apiserver"
	I0819 13:28:21.068633 2152780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 13:28:21.068704 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:28:21.108484 2152780 cri.go:89] found id: ""
	I0819 13:28:21.108514 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.108532 2152780 logs.go:278] No container was found matching "etcd"
	I0819 13:28:21.108540 2152780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 13:28:21.108611 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:28:21.151863 2152780 cri.go:89] found id: ""
	I0819 13:28:21.151894 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.151905 2152780 logs.go:278] No container was found matching "coredns"
	I0819 13:28:21.151914 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:28:21.151992 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:28:21.191844 2152780 cri.go:89] found id: ""
	I0819 13:28:21.191873 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.191883 2152780 logs.go:278] No container was found matching "kube-scheduler"
	I0819 13:28:21.191891 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:28:21.191970 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:28:21.230545 2152780 cri.go:89] found id: ""
	I0819 13:28:21.230580 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.230588 2152780 logs.go:278] No container was found matching "kube-proxy"
	I0819 13:28:21.230595 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:28:21.230650 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:28:21.264157 2152780 cri.go:89] found id: ""
	I0819 13:28:21.264211 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.264225 2152780 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 13:28:21.264233 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 13:28:21.264301 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:28:21.297523 2152780 cri.go:89] found id: ""
	I0819 13:28:21.297558 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.297566 2152780 logs.go:278] No container was found matching "kindnet"
	I0819 13:28:21.297573 2152780 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:28:21.297630 2152780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:28:21.336765 2152780 cri.go:89] found id: ""
	I0819 13:28:21.336793 2152780 logs.go:276] 0 containers: []
	W0819 13:28:21.336802 2152780 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 13:28:21.336815 2152780 logs.go:123] Gathering logs for kubelet ...
	I0819 13:28:21.336839 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:28:21.387668 2152780 logs.go:123] Gathering logs for dmesg ...
	I0819 13:28:21.387711 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:28:21.402131 2152780 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:28:21.402173 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 13:28:21.473587 2152780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 13:28:21.473615 2152780 logs.go:123] Gathering logs for CRI-O ...
	I0819 13:28:21.473632 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 13:28:21.576567 2152780 logs.go:123] Gathering logs for container status ...
	I0819 13:28:21.576610 2152780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 13:28:21.614859 2152780 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 13:28:21.614919 2152780 out.go:270] * 
	W0819 13:28:21.615011 2152780 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.615032 2152780 out.go:270] * 
	W0819 13:28:21.615837 2152780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:28:21.619494 2152780 out.go:201] 
	W0819 13:28:21.620743 2152780 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 13:28:21.620793 2152780 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 13:28:21.620813 2152780 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 13:28:21.622143 2152780 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.579434339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074812579387150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdfa4d7f-7cb8-4f92-9201-e45db8ed9df8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.579974178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b006662-44b2-4b9d-a0ae-ed514bf6f32e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.580023700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b006662-44b2-4b9d-a0ae-ed514bf6f32e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.580063866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8b006662-44b2-4b9d-a0ae-ed514bf6f32e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.610996398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2de03e5-9ba5-4f2a-b937-3b260dae8d8f name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.611070517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2de03e5-9ba5-4f2a-b937-3b260dae8d8f name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.612674556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5efe076-af97-4054-ba5d-f746d14b1c5c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.613045710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074812613020737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5efe076-af97-4054-ba5d-f746d14b1c5c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.613685156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc23272f-dae2-41c3-957c-fe0500c8bd2a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.613770234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc23272f-dae2-41c3-957c-fe0500c8bd2a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.613827232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fc23272f-dae2-41c3-957c-fe0500c8bd2a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.646562842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0632653-5af6-4be3-9e40-27411a4e4a88 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.646695680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0632653-5af6-4be3-9e40-27411a4e4a88 name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.648444153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0cb9f2b-f83b-4d0d-8f07-f72ca57dcfcb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.648966165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074812648937595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0cb9f2b-f83b-4d0d-8f07-f72ca57dcfcb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.649540715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a2c8ad0-0852-43f1-b295-a7405b5ed21e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.649648332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a2c8ad0-0852-43f1-b295-a7405b5ed21e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.649696554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9a2c8ad0-0852-43f1-b295-a7405b5ed21e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.680133242Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00d4cad3-26ac-4a6f-a7e9-91bd96a94e5c name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.680222188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00d4cad3-26ac-4a6f-a7e9-91bd96a94e5c name=/runtime.v1.RuntimeService/Version
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.681907025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24f3c394-50fa-482e-9f4c-3b71af224420 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.682339405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724074812682305777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24f3c394-50fa-482e-9f4c-3b71af224420 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.682976275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2664b677-6d44-4ecd-a858-f817d3b30717 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.683023548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2664b677-6d44-4ecd-a858-f817d3b30717 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 13:40:12 old-k8s-version-620467 crio[650]: time="2024-08-19 13:40:12.683063920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2664b677-6d44-4ecd-a858-f817d3b30717 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052662] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041462] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.891894] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.505316] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613375] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.587338] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.059255] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064534] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.190720] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.158664] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.305924] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.524185] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.059446] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.840054] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	[ +13.606049] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 13:24] systemd-fstab-generator[5082]: Ignoring "noauto" option for root device
	[Aug19 13:26] systemd-fstab-generator[5362]: Ignoring "noauto" option for root device
	[  +0.067230] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:40:12 up 20 min,  0 users,  load average: 0.04, 0.04, 0.02
	Linux old-k8s-version-620467 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000b38870)
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: goroutine 154 [select]:
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b57ef0, 0x4f0ac20, 0xc000b08a00, 0x1, 0xc0001020c0)
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e1180, 0xc0001020c0)
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b22610, 0xc00092baa0)
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 19 13:40:08 old-k8s-version-620467 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 19 13:40:08 old-k8s-version-620467 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 13:40:08 old-k8s-version-620467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 13:40:09 old-k8s-version-620467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 143.
	Aug 19 13:40:09 old-k8s-version-620467 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 13:40:09 old-k8s-version-620467 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 13:40:09 old-k8s-version-620467 kubelet[6898]: I0819 13:40:09.620607    6898 server.go:416] Version: v1.20.0
	Aug 19 13:40:09 old-k8s-version-620467 kubelet[6898]: I0819 13:40:09.620829    6898 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 13:40:09 old-k8s-version-620467 kubelet[6898]: I0819 13:40:09.622810    6898 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 13:40:09 old-k8s-version-620467 kubelet[6898]: W0819 13:40:09.624037    6898 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 19 13:40:09 old-k8s-version-620467 kubelet[6898]: I0819 13:40:09.624184    6898 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 2 (240.292585ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-620467" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (165.63s)
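
The failure above follows the pattern the log itself diagnoses: the kubelet on old-k8s-version-620467 never becomes healthy, kubeadm's wait-control-plane phase times out, and the apiserver is left stopped, so the addon check cannot run. A minimal diagnostic sketch, reproducing only the steps and the --extra-config suggestion already printed in the log above (the profile name, driver, runtime, and Kubernetes version are taken from this report; the cgroup-driver flag is the log's own suggestion, not a verified fix):

    # Confirm the kubelet is crash-looping on the node and capture why
    minikube -p old-k8s-version-620467 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-620467 ssh -- sudo journalctl -xeu kubelet | tail -n 100

    # List any control-plane containers CRI-O started (none were found in the dump above)
    minikube -p old-k8s-version-620467 ssh -- \
      sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

    # Retry the start with the cgroup driver the log suggests
    minikube start -p old-k8s-version-620467 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the retry still fails, the related upstream issue referenced in the log (kubernetes/minikube#4172) is the next place to look.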

                                                
                                    

Test pass (244/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.14
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 4.37
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 114.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 128.23
31 TestAddons/serial/GCPAuth/Namespaces 1.32
33 TestAddons/parallel/Registry 19
35 TestAddons/parallel/InspektorGadget 12.05
37 TestAddons/parallel/HelmTiller 9.44
39 TestAddons/parallel/CSI 46.75
40 TestAddons/parallel/Headlamp 17.52
41 TestAddons/parallel/CloudSpanner 5.56
42 TestAddons/parallel/LocalPath 10.16
43 TestAddons/parallel/NvidiaDevicePlugin 5.59
44 TestAddons/parallel/Yakd 12.14
46 TestCertOptions 48.55
47 TestCertExpiration 327.07
49 TestForceSystemdFlag 63.12
50 TestForceSystemdEnv 47.48
52 TestKVMDriverInstallOrUpdate 1.18
56 TestErrorSpam/setup 44.18
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.75
59 TestErrorSpam/pause 1.51
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 5.44
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 81.02
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.83
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
73 TestFunctional/serial/CacheCmd/cache/add_local 1.03
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 44.47
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.33
84 TestFunctional/serial/LogsFileCmd 1.43
85 TestFunctional/serial/InvalidService 4.51
87 TestFunctional/parallel/ConfigCmd 0.35
88 TestFunctional/parallel/DashboardCmd 19.8
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 9.55
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 38.25
99 TestFunctional/parallel/SSHCmd 0.46
100 TestFunctional/parallel/CpCmd 1.65
101 TestFunctional/parallel/MySQL 27.39
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.41
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
111 TestFunctional/parallel/License 0.2
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.68
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.43
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
122 TestFunctional/parallel/ImageCommands/Setup 0.39
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.78
125 TestFunctional/parallel/MountCmd/any-port 8.85
126 TestFunctional/parallel/ProfileCmd/profile_list 0.28
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
128 TestFunctional/parallel/ServiceCmd/DeployApp 11.16
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.26
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
144 TestFunctional/parallel/MountCmd/specific-port 2
145 TestFunctional/parallel/MountCmd/VerifyCleanup 0.77
146 TestFunctional/parallel/ServiceCmd/List 0.49
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
149 TestFunctional/parallel/ServiceCmd/Format 0.37
150 TestFunctional/parallel/ServiceCmd/URL 0.34
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 187.73
158 TestMultiControlPlane/serial/DeployApp 4.61
159 TestMultiControlPlane/serial/PingHostFromPods 1.26
160 TestMultiControlPlane/serial/AddWorkerNode 55.03
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.81
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.24
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
172 TestMultiControlPlane/serial/RestartCluster 346.95
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
174 TestMultiControlPlane/serial/AddSecondaryNode 72.84
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
179 TestJSONOutput/start/Command 86.29
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.7
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.61
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.65
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 86.1
211 TestMountStart/serial/StartWithMountFirst 30.09
212 TestMountStart/serial/VerifyMountFirst 0.38
213 TestMountStart/serial/StartWithMountSecond 28.01
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 24.28
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 107.77
223 TestMultiNode/serial/DeployApp2Nodes 4.36
224 TestMultiNode/serial/PingHostFrom2Pods 0.8
225 TestMultiNode/serial/AddNode 48.96
226 TestMultiNode/serial/MultiNodeLabels 0.07
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 7.15
229 TestMultiNode/serial/StopNode 2.33
230 TestMultiNode/serial/StartAfterStop 37.51
232 TestMultiNode/serial/DeleteNode 2.14
234 TestMultiNode/serial/RestartMultiNode 184.01
235 TestMultiNode/serial/ValidateNameConflict 45.02
242 TestScheduledStopUnix 112.14
246 TestRunningBinaryUpgrade 211.11
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 95.34
260 TestNetworkPlugins/group/false 2.95
264 TestNoKubernetes/serial/StartWithStopK8s 38.59
265 TestNoKubernetes/serial/Start 51.95
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
267 TestNoKubernetes/serial/ProfileList 31.64
268 TestNoKubernetes/serial/Stop 1.3
269 TestNoKubernetes/serial/StartNoArgs 21.82
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
271 TestStoppedBinaryUpgrade/Setup 0.56
272 TestStoppedBinaryUpgrade/Upgrade 120.59
281 TestPause/serial/Start 103.5
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
283 TestNetworkPlugins/group/auto/Start 88.12
284 TestNetworkPlugins/group/kindnet/Start 87.18
286 TestNetworkPlugins/group/auto/KubeletFlags 0.24
287 TestNetworkPlugins/group/auto/NetCatPod 10.32
288 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
289 TestNetworkPlugins/group/auto/DNS 0.18
290 TestNetworkPlugins/group/auto/Localhost 0.14
291 TestNetworkPlugins/group/auto/HairPin 0.16
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
293 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
294 TestNetworkPlugins/group/kindnet/DNS 0.17
295 TestNetworkPlugins/group/kindnet/Localhost 0.14
296 TestNetworkPlugins/group/kindnet/HairPin 0.13
297 TestNetworkPlugins/group/calico/Start 77.57
298 TestNetworkPlugins/group/custom-flannel/Start 79.5
299 TestNetworkPlugins/group/enable-default-cni/Start 91.93
300 TestNetworkPlugins/group/flannel/Start 124.39
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/KubeletFlags 0.33
303 TestNetworkPlugins/group/calico/NetCatPod 13.65
304 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
305 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
306 TestNetworkPlugins/group/calico/DNS 0.16
307 TestNetworkPlugins/group/calico/Localhost 0.16
308 TestNetworkPlugins/group/calico/HairPin 0.15
309 TestNetworkPlugins/group/custom-flannel/DNS 0.16
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.31
314 TestNetworkPlugins/group/bridge/Start 60.62
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
321 TestStartStop/group/no-preload/serial/FirstStart 109.19
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
324 TestNetworkPlugins/group/flannel/NetCatPod 10.24
325 TestNetworkPlugins/group/flannel/DNS 0.15
326 TestNetworkPlugins/group/flannel/Localhost 0.14
327 TestNetworkPlugins/group/flannel/HairPin 0.15
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
329 TestNetworkPlugins/group/bridge/NetCatPod 13.49
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 103.3
332 TestNetworkPlugins/group/bridge/DNS 21.06
333 TestNetworkPlugins/group/bridge/Localhost 0.17
334 TestNetworkPlugins/group/bridge/HairPin 0.16
336 TestStartStop/group/newest-cni/serial/FirstStart 51.99
337 TestStartStop/group/no-preload/serial/DeployApp 10.33
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
342 TestStartStop/group/newest-cni/serial/Stop 7.35
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
344 TestStartStop/group/newest-cni/serial/SecondStart 36.4
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.3
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
351 TestStartStop/group/newest-cni/serial/Pause 2.38
353 TestStartStop/group/embed-certs/serial/FirstStart 81.56
355 TestStartStop/group/embed-certs/serial/DeployApp 8.29
356 TestStartStop/group/no-preload/serial/SecondStart 683.93
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 536.84
363 TestStartStop/group/old-k8s-version/serial/Stop 5.31
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
367 TestStartStop/group/embed-certs/serial/SecondStart 487.3
x
+
TestDownloadOnly/v1.20.0/json-events (7.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-919545 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-919545 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.143813745s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-919545
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-919545: exit status 85 (60.431043ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-919545 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |          |
	|         | -p download-only-919545        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:56:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:56:39.441948 2094204 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:39.442222 2094204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:39.442233 2094204 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:39.442237 2094204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:39.442440 2094204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	W0819 11:56:39.442608 2094204 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19479-2086935/.minikube/config/config.json: open /home/jenkins/minikube-integration/19479-2086935/.minikube/config/config.json: no such file or directory
	I0819 11:56:39.443231 2094204 out.go:352] Setting JSON to true
	I0819 11:56:39.444220 2094204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160749,"bootTime":1723907850,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:56:39.444284 2094204 start.go:139] virtualization: kvm guest
	I0819 11:56:39.446443 2094204 out.go:97] [download-only-919545] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0819 11:56:39.446541 2094204 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:56:39.446600 2094204 notify.go:220] Checking for updates...
	I0819 11:56:39.447705 2094204 out.go:169] MINIKUBE_LOCATION=19479
	I0819 11:56:39.448774 2094204 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:39.449882 2094204 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 11:56:39.451022 2094204 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:39.452038 2094204 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 11:56:39.454070 2094204 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:56:39.454292 2094204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:56:39.486215 2094204 out.go:97] Using the kvm2 driver based on user configuration
	I0819 11:56:39.486245 2094204 start.go:297] selected driver: kvm2
	I0819 11:56:39.486264 2094204 start.go:901] validating driver "kvm2" against <nil>
	I0819 11:56:39.486597 2094204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:39.486677 2094204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19479-2086935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:56:39.501993 2094204 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:56:39.502068 2094204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:39.502742 2094204 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 11:56:39.502937 2094204 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:56:39.503015 2094204 cni.go:84] Creating CNI manager for ""
	I0819 11:56:39.503032 2094204 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 11:56:39.503041 2094204 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:39.503114 2094204 start.go:340] cluster config:
	{Name:download-only-919545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-919545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:39.503366 2094204 iso.go:125] acquiring lock: {Name:mk187c0515de2dd368378ad9b8b97e5033264f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:39.505059 2094204 out.go:97] Downloading VM boot image ...
	I0819 11:56:39.505101 2094204 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:56:42.182045 2094204 out.go:97] Starting "download-only-919545" primary control-plane node in "download-only-919545" cluster
	I0819 11:56:42.182066 2094204 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 11:56:42.208055 2094204 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:56:42.208096 2094204 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:42.208273 2094204 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 11:56:42.209756 2094204 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:56:42.209780 2094204 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 11:56:42.232339 2094204 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19479-2086935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-919545 host does not exist
	  To start a cluster, run: "minikube start -p download-only-919545"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-919545
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (4.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-096642 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-096642 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.370952996s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-096642
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-096642: exit status 85 (57.595536ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-919545 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | -p download-only-919545        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| delete  | -p download-only-919545        | download-only-919545 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC | 19 Aug 24 11:56 UTC |
	| start   | -o=json --download-only        | download-only-096642 | jenkins | v1.33.1 | 19 Aug 24 11:56 UTC |                     |
	|         | -p download-only-096642        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:56:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:56:46.899299 2094410 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:46.899778 2094410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:46.899797 2094410 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:46.899805 2094410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:46.900321 2094410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 11:56:46.901275 2094410 out.go:352] Setting JSON to true
	I0819 11:56:46.902196 2094410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":160757,"bootTime":1723907850,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:56:46.902251 2094410 start.go:139] virtualization: kvm guest
	I0819 11:56:46.903901 2094410 out.go:97] [download-only-096642] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:56:46.904069 2094410 notify.go:220] Checking for updates...
	I0819 11:56:46.905109 2094410 out.go:169] MINIKUBE_LOCATION=19479
	I0819 11:56:46.906252 2094410 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:46.907316 2094410 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 11:56:46.908455 2094410 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 11:56:46.909532 2094410 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-096642 host does not exist
	  To start a cluster, run: "minikube start -p download-only-096642"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-096642
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-869957 --alsologtostderr --binary-mirror http://127.0.0.1:44967 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-869957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-869957
--- PASS: TestBinaryMirror (0.61s)
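The --binary-mirror flag redirects minikube's kubeadm/kubelet/kubectl downloads to an alternate HTTP endpoint; in this run the harness serves one on 127.0.0.1:44967. A rough sketch of standing up such a mirror by hand, assuming a local directory laid out the way minikube expects to fetch the binaries (the port, directory and profile name below are illustrative):

# Hypothetical local mirror: serve a directory over HTTP, then point minikube at it.
mkdir -p /tmp/k8s-mirror && cd /tmp/k8s-mirror
python3 -m http.server 44967 &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:44967 --driver=kvm2 --container-runtime=crio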

                                                
                                    
x
+
TestOffline (114.43s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-209114 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-209114 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m53.550604865s)
helpers_test.go:175: Cleaning up "offline-crio-209114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-209114
--- PASS: TestOffline (114.43s)
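TestOffline only passes when the ISO, preload tarball and images are already cached under the minikube home, so the start does not need the network. A sketch of priming that cache ahead of time; the profile name is a placeholder, and the cache paths match the download-only log earlier in this report, relative to the default ~/.minikube:

# Prime the cache while online; later starts can then run without network access.
out/minikube-linux-amd64 start --download-only -p offline-demo \
  --driver=kvm2 --container-runtime=crio
ls ~/.minikube/cache/iso/amd64 ~/.minikube/cache/preloaded-tarball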

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-442145
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-442145: exit status 85 (51.912526ms)

                                                
                                                
-- stdout --
	* Profile "addons-442145" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442145"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-442145
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-442145: exit status 85 (52.771525ms)

                                                
                                                
-- stdout --
	* Profile "addons-442145" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442145"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (128.23s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-442145 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-442145 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m8.225746924s)
--- PASS: TestAddons/Setup (128.23s)
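The setup run turns on every addon in a single start invocation; the same addons can also be toggled individually once the profile is running. A small sketch using the addons subcommands against this profile (metrics-server is just one of the addons listed in the start command above):

# Toggle a single addon on the running profile and list the state of all addons.
out/minikube-linux-amd64 -p addons-442145 addons enable metrics-server
out/minikube-linux-amd64 -p addons-442145 addons list
out/minikube-linux-amd64 -p addons-442145 addons disable metrics-server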

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (1.32s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-442145 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-442145 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-442145 get secret gcp-auth -n new-namespace: exit status 1 (84.991746ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-442145 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-442145 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.32s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.109706ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-dqqsf" [2c0f4d4e-c475-4bda-a9c5-8958f5c9f2f7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003085737s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kfv85" [56d07c49-4b74-4b80-94c5-73b9277814b3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004868641s
addons_test.go:342: (dbg) Run:  kubectl --context addons-442145 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-442145 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-442145 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.223405653s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 ip
2024/08/19 11:59:43 [DEBUG] GET http://192.168.39.65:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.00s)
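The registry addon exposes an in-cluster image registry on port 5000, which is what the DEBUG GET against 192.168.39.65:5000 above is probing. A sketch of the same reachability check done by hand from the host; the /v2/_catalog path is the standard Docker registry HTTP API, not something this test calls:

# Reach the registry addon from the host via the node IP.
REGISTRY="$(out/minikube-linux-amd64 -p addons-442145 ip):5000"
curl -s "http://${REGISTRY}/v2/_catalog"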

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-glw86" [78b1463e-3bbb-418f-8fae-6aac4714c89f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005444613s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-442145
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-442145: (6.042940069s)
--- PASS: TestAddons/parallel/InspektorGadget (12.05s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.44s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.247746ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-97zl7" [246084b2-d891-47a5-b87d-17cbe04e45e7] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004395761s
addons_test.go:475: (dbg) Run:  kubectl --context addons-442145 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-442145 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.850153347s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.44s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.463526ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-442145 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-442145 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d8c68717-4edd-4d7e-9111-4ce3018d18da] Pending
helpers_test.go:344: "task-pv-pod" [d8c68717-4edd-4d7e-9111-4ce3018d18da] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d8c68717-4edd-4d7e-9111-4ce3018d18da] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004283616s
addons_test.go:590: (dbg) Run:  kubectl --context addons-442145 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-442145 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-442145 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-442145 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-442145 delete pod task-pv-pod: (1.096337733s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-442145 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-442145 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-442145 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e0a1d738-e6b1-4690-a894-122e99e54bd3] Pending
helpers_test.go:344: "task-pv-pod-restore" [e0a1d738-e6b1-4690-a894-122e99e54bd3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e0a1d738-e6b1-4690-a894-122e99e54bd3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003061394s
addons_test.go:632: (dbg) Run:  kubectl --context addons-442145 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-442145 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-442145 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.732722757s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.75s)
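The snapshot step applies testdata/csi-hostpath-driver/snapshot.yaml, which is not reproduced in this log. A minimal sketch of an equivalent VolumeSnapshot for the hpvc claim; the snapshot class name is an assumption about what the csi-hostpath-driver addon installs:

# Hypothetical equivalent of the snapshot step: snapshot the "hpvc" claim, then poll readiness.
kubectl --context addons-442145 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name from the addon
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-442145 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'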

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-442145 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-cr25w" [9fc57eef-79d0-4b09-b9ca-8bfd8ff0c389] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-cr25w" [9fc57eef-79d0-4b09-b9ca-8bfd8ff0c389] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-cr25w" [9fc57eef-79d0-4b09-b9ca-8bfd8ff0c389] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004093622s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 addons disable headlamp --alsologtostderr -v=1: (5.664213904s)
--- PASS: TestAddons/parallel/Headlamp (17.52s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-7kcch" [339bc451-4201-4e15-80e2-672b470ffb25] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010163057s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-442145
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-442145 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-442145 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fb3a8900-cdd3-4a83-a3b5-5e08ee9e4663] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fb3a8900-cdd3-4a83-a3b5-5e08ee9e4663] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fb3a8900-cdd3-4a83-a3b5-5e08ee9e4663] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00503912s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-442145 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 ssh "cat /opt/local-path-provisioner/pvc-6db4af90-c59c-4e3b-9fe3-ca012679ce85_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-442145 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-442145 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.16s)
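The local-path test consumes testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml, neither of which is shown in this log. A rough sketch of an equivalent claim bound to the class the storage-provisioner-rancher addon creates; the class name local-path is the provisioner's default and is assumed here:

# Hypothetical equivalent of the PVC step: a small claim against the local-path class.
kubectl --context addons-442145 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path      # default class name of the rancher local-path provisioner (assumed)
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF
kubectl --context addons-442145 get pvc test-pvc -o jsonpath='{.status.phase}'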

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mtvp4" [94c95122-aa0e-4340-8546-8a6b1539fa00] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004197848s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-442145
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.59s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fnwkb" [310fe85f-6cd8-4f1e-98e3-37af4292866c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004750759s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-442145 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-442145 addons disable yakd --alsologtostderr -v=1: (6.134321419s)
--- PASS: TestAddons/parallel/Yakd (12.14s)

                                                
                                    
x
+
TestCertOptions (48.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-789684 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-789684 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.307553228s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-789684 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-789684 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-789684 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-789684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-789684
--- PASS: TestCertOptions (48.55s)
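The test starts the cluster with extra apiserver IPs and names plus a non-default port, then reads the generated certificate and kubeconfig back out of the node. A sketch of the same checks by hand on that profile; the grep filters are only for readability:

# Confirm the custom IPs/names show up in the apiserver certificate SANs.
out/minikube-linux-amd64 -p cert-options-789684 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
# Confirm the in-node kubeconfig points at the custom apiserver port 8555.
out/minikube-linux-amd64 ssh -p cert-options-789684 -- "sudo cat /etc/kubernetes/admin.conf" | grep "server:"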

                                                
                                    
x
+
TestCertExpiration (327.07s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-405492 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0819 13:01:26.667682 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-405492 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m34.995588379s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-405492 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-405492 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (50.937709091s)
helpers_test.go:175: Cleaning up "cert-expiration-405492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-405492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-405492: (1.134967685s)
--- PASS: TestCertExpiration (327.07s)
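The test first starts with --cert-expiration=3m, lets the certificates lapse, then restarts with --cert-expiration=8760h so they are regenerated. A quick sketch of reading the apiserver certificate's validity window on such a profile:

# Print notBefore/notAfter for the apiserver certificate inside the node.
out/minikube-linux-amd64 -p cert-expiration-405492 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"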

                                                
                                    
x
+
TestForceSystemdFlag (63.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-481818 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0819 13:04:02.188669 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-481818 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.879335011s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-481818 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-481818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-481818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-481818: (1.043432546s)
--- PASS: TestForceSystemdFlag (63.12s)
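With --force-systemd the test reads /etc/crio/crio.conf.d/02-crio.conf inside the node and expects CRI-O to be using the systemd cgroup driver. A sketch of the same check; the exact key and value are an assumption about the drop-in's contents, since the log does not print the file:

# Confirm CRI-O is configured for the systemd cgroup driver inside the node.
out/minikube-linux-amd64 -p force-systemd-flag-481818 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# expected, assuming the usual CRI-O drop-in:  cgroup_manager = "systemd"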

                                                
                                    
x
+
TestForceSystemdEnv (47.48s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-344812 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-344812 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.208285531s)
helpers_test.go:175: Cleaning up "force-systemd-env-344812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-344812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-344812: (1.269397732s)
--- PASS: TestForceSystemdEnv (47.48s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.18s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.18s)

                                                
                                    
x
+
TestErrorSpam/setup (44.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-055769 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-055769 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-055769 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-055769 --driver=kvm2  --container-runtime=crio: (44.183604177s)
--- PASS: TestErrorSpam/setup (44.18s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
x
+
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (5.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 stop: (2.298816775s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 stop: (1.876599298s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-055769 --log_dir /tmp/nospam-055769 stop: (1.262067204s)
--- PASS: TestErrorSpam/stop (5.44s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19479-2086935/.minikube/files/etc/test/nested/copy/2094193/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
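Files staged under the minikube home's files/ directory are synced into the node at the same relative path when the cluster starts; that is why this step stages .../.minikube/files/etc/test/nested/copy/2094193/hosts, which a later functional subtest (not in this excerpt) looks for inside the VM. A sketch of the mechanism with a throwaway profile; this job sets MINIKUBE_HOME to the integration directory shown above, while the default is ~/.minikube:

# Stage a file for sync, start a cluster, then read the file back from inside the node.
mkdir -p ~/.minikube/files/etc/demo
echo hello > ~/.minikube/files/etc/demo/marker
out/minikube-linux-amd64 start -p functional-demo --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p functional-demo ssh "cat /etc/demo/marker"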

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102083 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0819 12:09:02.189340 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.196550 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.207994 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.229436 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.271017 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.352531 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.514109 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:02.835835 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:03.478169 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:04.759663 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:07.321925 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:12.443790 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:22.685458 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:09:43.167288 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-102083 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.015450023s)
--- PASS: TestFunctional/serial/StartWithProxy (81.02s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.83s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102083 --alsologtostderr -v=8
E0819 12:10:24.129678 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-102083 --alsologtostderr -v=8: (36.830879195s)
functional_test.go:663: soft start took 36.831817642s for "functional-102083" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.83s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-102083 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 cache add registry.k8s.io/pause:3.1: (1.050749674s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 cache add registry.k8s.io/pause:3.3: (1.109848308s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 cache add registry.k8s.io/pause:latest: (1.07242942s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-102083 /tmp/TestFunctionalserialCacheCmdcacheadd_local1482458097/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cache add minikube-local-cache-test:functional-102083
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cache delete minikube-local-cache-test:functional-102083
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-102083
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.603928ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 kubectl -- --context functional-102083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-102083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (44.47s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-102083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.466879667s)
functional_test.go:761: restart took 44.467023692s for "functional-102083" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.47s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-102083 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.33s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 logs: (1.330915905s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.43s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 logs --file /tmp/TestFunctionalserialLogsFileCmd771585158/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 logs --file /tmp/TestFunctionalserialLogsFileCmd771585158/001/logs.txt: (1.427729037s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (4.51s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-102083 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-102083
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-102083: exit status 115 (282.942028ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.110:32413 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-102083 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-102083 delete -f testdata/invalidsvc.yaml: (1.02346719s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)

TestFunctional/parallel/ConfigCmd (0.35s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 config get cpus: exit status 14 (58.010403ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 config get cpus: exit status 14 (53.617508ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (19.8s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-102083 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-102083 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2103616: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.80s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-102083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.911966ms)

                                                
                                                
-- stdout --
	* [functional-102083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:11:46.809001 2103685 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:11:46.809157 2103685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:11:46.809210 2103685 out.go:358] Setting ErrFile to fd 2...
	I0819 12:11:46.809233 2103685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:11:46.809702 2103685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:11:46.810604 2103685 out.go:352] Setting JSON to false
	I0819 12:11:46.812009 2103685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":161657,"bootTime":1723907850,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:11:46.812088 2103685 start.go:139] virtualization: kvm guest
	I0819 12:11:46.813973 2103685 out.go:177] * [functional-102083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:11:46.815298 2103685 notify.go:220] Checking for updates...
	I0819 12:11:46.815328 2103685 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:11:46.816595 2103685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:11:46.817892 2103685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:11:46.819015 2103685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:11:46.820023 2103685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:11:46.821083 2103685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:11:46.822699 2103685 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:11:46.823324 2103685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:11:46.823386 2103685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:11:46.838961 2103685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I0819 12:11:46.839367 2103685 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:11:46.840025 2103685 main.go:141] libmachine: Using API Version  1
	I0819 12:11:46.840061 2103685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:11:46.840451 2103685 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:11:46.840676 2103685 main.go:141] libmachine: (functional-102083) Calling .DriverName
	I0819 12:11:46.840945 2103685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:11:46.841241 2103685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:11:46.841288 2103685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:11:46.856416 2103685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0819 12:11:46.856803 2103685 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:11:46.857283 2103685 main.go:141] libmachine: Using API Version  1
	I0819 12:11:46.857303 2103685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:11:46.857631 2103685 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:11:46.857806 2103685 main.go:141] libmachine: (functional-102083) Calling .DriverName
	I0819 12:11:46.889910 2103685 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:11:46.890935 2103685 start.go:297] selected driver: kvm2
	I0819 12:11:46.890955 2103685 start.go:901] validating driver "kvm2" against &{Name:functional-102083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-102083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:11:46.891084 2103685 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:11:46.893136 2103685 out.go:201] 
	W0819 12:11:46.894320 2103685 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 12:11:46.895340 2103685 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102083 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-102083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (175.912022ms)

                                                
                                                
-- stdout --
	* [functional-102083] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:11:25.313884 2101669 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:11:25.313994 2101669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:11:25.314005 2101669 out.go:358] Setting ErrFile to fd 2...
	I0819 12:11:25.314011 2101669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:11:25.314323 2101669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:11:25.314906 2101669 out.go:352] Setting JSON to false
	I0819 12:11:25.315897 2101669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":161635,"bootTime":1723907850,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:11:25.315971 2101669 start.go:139] virtualization: kvm guest
	I0819 12:11:25.317867 2101669 out.go:177] * [functional-102083] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 12:11:25.319011 2101669 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:11:25.319024 2101669 notify.go:220] Checking for updates...
	I0819 12:11:25.321111 2101669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:11:25.322064 2101669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 12:11:25.323185 2101669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 12:11:25.324291 2101669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:11:25.325379 2101669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:11:25.326868 2101669 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:11:25.327423 2101669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:11:25.327509 2101669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:11:25.348140 2101669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0819 12:11:25.348819 2101669 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:11:25.349385 2101669 main.go:141] libmachine: Using API Version  1
	I0819 12:11:25.349411 2101669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:11:25.349782 2101669 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:11:25.350009 2101669 main.go:141] libmachine: (functional-102083) Calling .DriverName
	I0819 12:11:25.352853 2101669 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:11:25.353780 2101669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:11:25.357462 2101669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:11:25.378656 2101669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0819 12:11:25.379028 2101669 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:11:25.379598 2101669 main.go:141] libmachine: Using API Version  1
	I0819 12:11:25.379616 2101669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:11:25.379941 2101669 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:11:25.380085 2101669 main.go:141] libmachine: (functional-102083) Calling .DriverName
	I0819 12:11:25.427400 2101669 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0819 12:11:25.428787 2101669 start.go:297] selected driver: kvm2
	I0819 12:11:25.428817 2101669 start.go:901] validating driver "kvm2" against &{Name:functional-102083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-102083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:11:25.428943 2101669 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:11:25.430823 2101669 out.go:201] 
	W0819 12:11:25.431743 2101669 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 12:11:25.432837 2101669 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (9.55s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-102083 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-102083 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fz9b8" [36bb31ec-384e-494e-9df2-88dc5989244c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fz9b8" [36bb31ec-384e-494e-9df2-88dc5989244c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004373077s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.110:30987
functional_test.go:1675: http://192.168.39.110:30987: success! body:

Hostname: hello-node-connect-67bdd5bbb4-fz9b8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.110:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.110:30987
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.55s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (38.25s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8a66c24e-4bcd-41b1-b30c-da9519f6b815] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004298926s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-102083 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-102083 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-102083 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-102083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [080056d6-24e1-4f1e-b46d-bf4ba73c20ec] Pending
helpers_test.go:344: "sp-pod" [080056d6-24e1-4f1e-b46d-bf4ba73c20ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [080056d6-24e1-4f1e-b46d-bf4ba73c20ec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.178501434s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-102083 exec sp-pod -- touch /tmp/mount/foo
E0819 12:11:46.051874 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-102083 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-102083 delete -f testdata/storage-provisioner/pod.yaml: (5.100095186s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-102083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b81fb766-2b9e-4c4a-a5f2-ce571a944bd5] Pending
helpers_test.go:344: "sp-pod" [b81fb766-2b9e-4c4a-a5f2-ce571a944bd5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b81fb766-2b9e-4c4a-a5f2-ce571a944bd5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004759531s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-102083 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.25s)

TestFunctional/parallel/SSHCmd (0.46s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.65s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh -n functional-102083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cp functional-102083:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2755884963/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh -n functional-102083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh -n functional-102083 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.65s)

TestFunctional/parallel/MySQL (27.39s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-102083 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-bkf9b" [c9b48756-2415-4f0e-a245-5ba1cdddaaaa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-bkf9b" [c9b48756-2415-4f0e-a245-5ba1cdddaaaa] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.40783746s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-102083 exec mysql-6cdb49bbb-bkf9b -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-102083 exec mysql-6cdb49bbb-bkf9b -- mysql -ppassword -e "show databases;": exit status 1 (253.397546ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-102083 exec mysql-6cdb49bbb-bkf9b -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-102083 exec mysql-6cdb49bbb-bkf9b -- mysql -ppassword -e "show databases;": exit status 1 (610.759423ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-102083 exec mysql-6cdb49bbb-bkf9b -- mysql -ppassword -e "show databases;"
2024/08/19 12:12:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (27.39s)

TestFunctional/parallel/FileSync (0.23s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2094193/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /etc/test/nested/copy/2094193/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.41s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2094193.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /etc/ssl/certs/2094193.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2094193.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /usr/share/ca-certificates/2094193.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/20941932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /etc/ssl/certs/20941932.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/20941932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /usr/share/ca-certificates/20941932.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-102083 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh "sudo systemctl is-active docker": exit status 1 (225.051113ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh "sudo systemctl is-active containerd": exit status 1 (221.162253ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
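
The non-zero exits above are the expected outcome: "systemctl is-active" returns a non-zero status (surfaced as ssh exit status 3 here) when the queried unit is not active, and on a cri-o profile both docker and containerd should report "inactive". As a sketch, the active runtime can be checked the same way, assuming the guest's cri-o service unit is named crio:

    out/minikube-linux-amd64 -p functional-102083 ssh "sudo systemctl is-active crio"    # expected output: active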

                                                
                                    
x
+
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102083 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-102083  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-102083  | ea99f5cfcfd9e | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-102083 image ls --format table --alsologtostderr:
I0819 12:11:50.550360 2103838 out.go:345] Setting OutFile to fd 1 ...
I0819 12:11:50.550506 2103838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:50.550520 2103838 out.go:358] Setting ErrFile to fd 2...
I0819 12:11:50.550527 2103838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:50.550841 2103838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
I0819 12:11:50.551693 2103838 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:50.551829 2103838 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:50.552247 2103838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:50.552297 2103838 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:50.569078 2103838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32945
I0819 12:11:50.569557 2103838 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:50.570144 2103838 main.go:141] libmachine: Using API Version  1
I0819 12:11:50.570166 2103838 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:50.570560 2103838 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:50.570805 2103838 main.go:141] libmachine: (functional-102083) Calling .GetState
I0819 12:11:50.572790 2103838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:50.572830 2103838 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:50.588048 2103838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
I0819 12:11:50.588522 2103838 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:50.589100 2103838 main.go:141] libmachine: Using API Version  1
I0819 12:11:50.589122 2103838 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:50.589478 2103838 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:50.589684 2103838 main.go:141] libmachine: (functional-102083) Calling .DriverName
I0819 12:11:50.589913 2103838 ssh_runner.go:195] Run: systemctl --version
I0819 12:11:50.589959 2103838 main.go:141] libmachine: (functional-102083) Calling .GetSSHHostname
I0819 12:11:50.592793 2103838 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:50.593216 2103838 main.go:141] libmachine: (functional-102083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:e6", ip: ""} in network mk-functional-102083: {Iface:virbr1 ExpiryTime:2024-08-19 13:08:43 +0000 UTC Type:0 Mac:52:54:00:4e:6e:e6 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-102083 Clientid:01:52:54:00:4e:6e:e6}
I0819 12:11:50.593257 2103838 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:50.593341 2103838 main.go:141] libmachine: (functional-102083) Calling .GetSSHPort
I0819 12:11:50.593476 2103838 main.go:141] libmachine: (functional-102083) Calling .GetSSHKeyPath
I0819 12:11:50.593653 2103838 main.go:141] libmachine: (functional-102083) Calling .GetSSHUsername
I0819 12:11:50.593812 2103838 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/functional-102083/id_rsa Username:docker}
I0819 12:11:50.690046 2103838 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 12:11:50.745633 2103838 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.745658 2103838 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.746005 2103838 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.746034 2103838 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.746044 2103838 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.746052 2103838 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.746303 2103838 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.746329 2103838 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.746356 2103838 main.go:141] libmachine: (functional-102083) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
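
As the stderr trace shows, "image ls" is served by crictl inside the guest (ssh_runner: sudo crictl images --output json). For debugging, the raw listing the table is rendered from can be read directly:

    # same data as the table above, straight from the CRI-O image store in the guest
    out/minikube-linux-amd64 -p functional-102083 ssh "sudo crictl images --output json"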

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102083 image ls --format json --alsologtostderr:
[{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ea99f5cfcfd9ecbde213f247a9356e3bc070da11faab25f211418cc779ee69f1","repoDigests":["localhost/minikube-local-cache-test@sha256:05ac9fad5a28217f4cf3c2569201428b3c9bb5c2d23309cdb3d564252d94e404"],"repoTags":["localhost/minikube-local-cache-test:functional-102083"],"size":"3328"},{"id":"604f5db
92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["do
cker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-102083"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1766f54c897f0e57040741e674146
2f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8
d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/
pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-102083 image ls --format json --alsologtostderr:
I0819 12:11:50.119864 2103814 out.go:345] Setting OutFile to fd 1 ...
I0819 12:11:50.120009 2103814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:50.120020 2103814 out.go:358] Setting ErrFile to fd 2...
I0819 12:11:50.120025 2103814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:50.120216 2103814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
I0819 12:11:50.120860 2103814 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:50.120972 2103814 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:50.121393 2103814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:50.121441 2103814 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:50.136734 2103814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
I0819 12:11:50.137198 2103814 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:50.137784 2103814 main.go:141] libmachine: Using API Version  1
I0819 12:11:50.137810 2103814 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:50.138169 2103814 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:50.138429 2103814 main.go:141] libmachine: (functional-102083) Calling .GetState
I0819 12:11:50.140338 2103814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:50.140389 2103814 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:50.155369 2103814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
I0819 12:11:50.155846 2103814 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:50.156366 2103814 main.go:141] libmachine: Using API Version  1
I0819 12:11:50.156394 2103814 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:50.156798 2103814 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:50.157035 2103814 main.go:141] libmachine: (functional-102083) Calling .DriverName
I0819 12:11:50.157283 2103814 ssh_runner.go:195] Run: systemctl --version
I0819 12:11:50.157311 2103814 main.go:141] libmachine: (functional-102083) Calling .GetSSHHostname
I0819 12:11:50.160358 2103814 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:50.160800 2103814 main.go:141] libmachine: (functional-102083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:e6", ip: ""} in network mk-functional-102083: {Iface:virbr1 ExpiryTime:2024-08-19 13:08:43 +0000 UTC Type:0 Mac:52:54:00:4e:6e:e6 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-102083 Clientid:01:52:54:00:4e:6e:e6}
I0819 12:11:50.160829 2103814 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:50.160971 2103814 main.go:141] libmachine: (functional-102083) Calling .GetSSHPort
I0819 12:11:50.161153 2103814 main.go:141] libmachine: (functional-102083) Calling .GetSSHKeyPath
I0819 12:11:50.161327 2103814 main.go:141] libmachine: (functional-102083) Calling .GetSSHUsername
I0819 12:11:50.161456 2103814 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/functional-102083/id_rsa Username:docker}
I0819 12:11:50.238441 2103814 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 12:11:50.494727 2103814 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.494744 2103814 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.495057 2103814 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.495123 2103814 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.495145 2103814 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.495157 2103814 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.495441 2103814 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.495463 2103814 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.495502 2103814 main.go:141] libmachine: (functional-102083) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102083 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-102083
size: "4943877"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ea99f5cfcfd9ecbde213f247a9356e3bc070da11faab25f211418cc779ee69f1
repoDigests:
- localhost/minikube-local-cache-test@sha256:05ac9fad5a28217f4cf3c2569201428b3c9bb5c2d23309cdb3d564252d94e404
repoTags:
- localhost/minikube-local-cache-test:functional-102083
size: "3328"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-102083 image ls --format yaml --alsologtostderr:
I0819 12:11:50.807728 2103862 out.go:345] Setting OutFile to fd 1 ...
I0819 12:11:50.807880 2103862 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:50.807891 2103862 out.go:358] Setting ErrFile to fd 2...
I0819 12:11:50.807897 2103862 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:50.808180 2103862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
I0819 12:11:50.808868 2103862 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:50.808982 2103862 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:50.809362 2103862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:50.809412 2103862 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:50.824413 2103862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
I0819 12:11:50.824869 2103862 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:50.825466 2103862 main.go:141] libmachine: Using API Version  1
I0819 12:11:50.825490 2103862 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:50.825859 2103862 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:50.826073 2103862 main.go:141] libmachine: (functional-102083) Calling .GetState
I0819 12:11:50.827821 2103862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:50.827863 2103862 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:50.843158 2103862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
I0819 12:11:50.843577 2103862 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:50.844046 2103862 main.go:141] libmachine: Using API Version  1
I0819 12:11:50.844068 2103862 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:50.844379 2103862 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:50.844557 2103862 main.go:141] libmachine: (functional-102083) Calling .DriverName
I0819 12:11:50.844741 2103862 ssh_runner.go:195] Run: systemctl --version
I0819 12:11:50.844785 2103862 main.go:141] libmachine: (functional-102083) Calling .GetSSHHostname
I0819 12:11:50.847254 2103862 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:50.847712 2103862 main.go:141] libmachine: (functional-102083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:e6", ip: ""} in network mk-functional-102083: {Iface:virbr1 ExpiryTime:2024-08-19 13:08:43 +0000 UTC Type:0 Mac:52:54:00:4e:6e:e6 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-102083 Clientid:01:52:54:00:4e:6e:e6}
I0819 12:11:50.847736 2103862 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:50.847886 2103862 main.go:141] libmachine: (functional-102083) Calling .GetSSHPort
I0819 12:11:50.848060 2103862 main.go:141] libmachine: (functional-102083) Calling .GetSSHKeyPath
I0819 12:11:50.848237 2103862 main.go:141] libmachine: (functional-102083) Calling .GetSSHUsername
I0819 12:11:50.848402 2103862 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/functional-102083/id_rsa Username:docker}
I0819 12:11:50.933291 2103862 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 12:11:50.985273 2103862 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.985293 2103862 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.985646 2103862 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.985657 2103862 main.go:141] libmachine: (functional-102083) DBG | Closing plugin on server side
I0819 12:11:50.985673 2103862 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:50.985690 2103862 main.go:141] libmachine: Making call to close driver server
I0819 12:11:50.985704 2103862 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:50.985976 2103862 main.go:141] libmachine: (functional-102083) DBG | Closing plugin on server side
I0819 12:11:50.986012 2103862 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:50.986034 2103862 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh pgrep buildkitd: exit status 1 (242.364863ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image build -t localhost/my-image:functional-102083 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 image build -t localhost/my-image:functional-102083 testdata/build --alsologtostderr: (2.937665815s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102083 image build -t localhost/my-image:functional-102083 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f1fbda1a6c8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-102083
--> 9736a4bfe9b
Successfully tagged localhost/my-image:functional-102083
9736a4bfe9b67de1770ecf153ebcea8d9f1ea95cb549efb6a5132aa5303bb464
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-102083 image build -t localhost/my-image:functional-102083 testdata/build --alsologtostderr:
I0819 12:11:51.285707 2103915 out.go:345] Setting OutFile to fd 1 ...
I0819 12:11:51.286063 2103915 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:51.286079 2103915 out.go:358] Setting ErrFile to fd 2...
I0819 12:11:51.286087 2103915 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 12:11:51.286353 2103915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
I0819 12:11:51.287260 2103915 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:51.287991 2103915 config.go:182] Loaded profile config "functional-102083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 12:11:51.288474 2103915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:51.288530 2103915 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:51.304635 2103915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
I0819 12:11:51.305085 2103915 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:51.305693 2103915 main.go:141] libmachine: Using API Version  1
I0819 12:11:51.305730 2103915 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:51.306224 2103915 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:51.306438 2103915 main.go:141] libmachine: (functional-102083) Calling .GetState
I0819 12:11:51.308586 2103915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 12:11:51.308633 2103915 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 12:11:51.323847 2103915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
I0819 12:11:51.324287 2103915 main.go:141] libmachine: () Calling .GetVersion
I0819 12:11:51.324888 2103915 main.go:141] libmachine: Using API Version  1
I0819 12:11:51.324913 2103915 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 12:11:51.325232 2103915 main.go:141] libmachine: () Calling .GetMachineName
I0819 12:11:51.325427 2103915 main.go:141] libmachine: (functional-102083) Calling .DriverName
I0819 12:11:51.325622 2103915 ssh_runner.go:195] Run: systemctl --version
I0819 12:11:51.325651 2103915 main.go:141] libmachine: (functional-102083) Calling .GetSSHHostname
I0819 12:11:51.328506 2103915 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:51.328987 2103915 main.go:141] libmachine: (functional-102083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:6e:e6", ip: ""} in network mk-functional-102083: {Iface:virbr1 ExpiryTime:2024-08-19 13:08:43 +0000 UTC Type:0 Mac:52:54:00:4e:6e:e6 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-102083 Clientid:01:52:54:00:4e:6e:e6}
I0819 12:11:51.329019 2103915 main.go:141] libmachine: (functional-102083) DBG | domain functional-102083 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:6e:e6 in network mk-functional-102083
I0819 12:11:51.329138 2103915 main.go:141] libmachine: (functional-102083) Calling .GetSSHPort
I0819 12:11:51.329305 2103915 main.go:141] libmachine: (functional-102083) Calling .GetSSHKeyPath
I0819 12:11:51.329477 2103915 main.go:141] libmachine: (functional-102083) Calling .GetSSHUsername
I0819 12:11:51.329639 2103915 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/functional-102083/id_rsa Username:docker}
I0819 12:11:51.435190 2103915 build_images.go:161] Building image from path: /tmp/build.531476665.tar
I0819 12:11:51.435280 2103915 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 12:11:51.465591 2103915 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.531476665.tar
I0819 12:11:51.470791 2103915 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.531476665.tar: stat -c "%s %y" /var/lib/minikube/build/build.531476665.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.531476665.tar': No such file or directory
I0819 12:11:51.470818 2103915 ssh_runner.go:362] scp /tmp/build.531476665.tar --> /var/lib/minikube/build/build.531476665.tar (3072 bytes)
I0819 12:11:51.546796 2103915 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.531476665
I0819 12:11:51.580376 2103915 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.531476665 -xf /var/lib/minikube/build/build.531476665.tar
I0819 12:11:51.615216 2103915 crio.go:315] Building image: /var/lib/minikube/build/build.531476665
I0819 12:11:51.615309 2103915 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-102083 /var/lib/minikube/build/build.531476665 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 12:11:54.136449 2103915 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-102083 /var/lib/minikube/build/build.531476665 --cgroup-manager=cgroupfs: (2.521104457s)
I0819 12:11:54.136527 2103915 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.531476665
I0819 12:11:54.151627 2103915 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.531476665.tar
I0819 12:11:54.164105 2103915 build_images.go:217] Built localhost/my-image:functional-102083 from /tmp/build.531476665.tar
I0819 12:11:54.164143 2103915 build_images.go:133] succeeded building to: functional-102083
I0819 12:11:54.164150 2103915 build_images.go:134] failed building to: 
I0819 12:11:54.164204 2103915 main.go:141] libmachine: Making call to close driver server
I0819 12:11:54.164222 2103915 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:54.164541 2103915 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:54.164556 2103915 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 12:11:54.164565 2103915 main.go:141] libmachine: Making call to close driver server
I0819 12:11:54.164572 2103915 main.go:141] libmachine: (functional-102083) Calling .Close
I0819 12:11:54.164809 2103915 main.go:141] libmachine: (functional-102083) DBG | Closing plugin on server side
I0819 12:11:54.164829 2103915 main.go:141] libmachine: Successfully made call to close driver server
I0819 12:11:54.164854 2103915 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
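
The build above is delegated to podman inside the guest (sudo podman build ... --cgroup-manager=cgroupfs) and corresponds to a three-step build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of the same flow, assuming a build context directory equivalent to testdata/build:

    # build an image inside the guest and confirm it landed in the cluster's image store
    out/minikube-linux-amd64 -p functional-102083 image build -t localhost/my-image:functional-102083 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-102083 image ls | grep my-image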

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-102083
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image load --daemon kicbase/echo-server:functional-102083 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 image load --daemon kicbase/echo-server:functional-102083 --alsologtostderr: (1.478587341s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdany-port2994088140/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724069485439828200" to /tmp/TestFunctionalparallelMountCmdany-port2994088140/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724069485439828200" to /tmp/TestFunctionalparallelMountCmdany-port2994088140/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724069485439828200" to /tmp/TestFunctionalparallelMountCmdany-port2994088140/001/test-1724069485439828200
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.948446ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 12:11 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 12:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 12:11 test-1724069485439828200
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh cat /mount-9p/test-1724069485439828200
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-102083 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f9b6a166-13de-4838-b071-5a9ef16279b7] Pending
helpers_test.go:344: "busybox-mount" [f9b6a166-13de-4838-b071-5a9ef16279b7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f9b6a166-13de-4838-b071-5a9ef16279b7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f9b6a166-13de-4838-b071-5a9ef16279b7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.014451215s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-102083 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdany-port2994088140/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.85s)
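
The mount flow above boils down to: start a 9p mount daemon on the host, confirm the mount inside the guest, exercise it from a pod, then unmount. A minimal sketch with a hypothetical host directory /tmp/hostdir in place of the generated temp directory used by the test:

    # run the mount in the background, then verify and clean up from inside the guest
    out/minikube-linux-amd64 mount -p functional-102083 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-102083 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-102083 ssh "sudo umount -f /mount-9p"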

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "231.501722ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.312165ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "246.98845ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.142394ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-102083 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-102083 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nwcgm" [b9dd1d86-f945-4023-bfd8-6fbed19b2f46] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nwcgm" [b9dd1d86-f945-4023-bfd8-6fbed19b2f46] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003847717s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.16s)
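
Once the hello-node deployment is exposed as a NodePort service, the allocated port can be read back with kubectl; a usage sketch (not part of the test itself):

    # print the NodePort assigned to the hello-node service
    kubectl --context functional-102083 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'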

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image load --daemon kicbase/echo-server:functional-102083 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-102083
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image load --daemon kicbase/echo-server:functional-102083 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-102083 image load --daemon kicbase/echo-server:functional-102083 --alsologtostderr: (1.865590307s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image save kicbase/echo-server:functional-102083 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image rm kicbase/echo-server:functional-102083 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-102083
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 image save --daemon kicbase/echo-server:functional-102083 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-102083
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)
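
Taken together, the ImageSaveToFile, ImageRemove and ImageLoadFromFile steps above amount to a save/remove/restore round trip; a sketch (the /tmp tarball path is arbitrary here, not the workspace path the test uses):

  # Save an image from the cluster to a tarball, delete it from the cluster, then restore it from the tarball
  out/minikube-linux-amd64 -p functional-102083 image save kicbase/echo-server:functional-102083 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-102083 image rm kicbase/echo-server:functional-102083
  out/minikube-linux-amd64 -p functional-102083 image load /tmp/echo-server-save.tar
  # Confirm the image is back
  out/minikube-linux-amd64 -p functional-102083 image ls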

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdspecific-port245702842/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.442448ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdspecific-port245702842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102083 ssh "sudo umount -f /mount-9p": exit status 1 (238.413369ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-102083 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdspecific-port245702842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)
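
Outside the test harness, the mount flow above looks roughly like this (the host source directory is a placeholder; the test runs the mount as a daemon, approximated here by backgrounding it):

  # Mount a host directory into the guest over 9p on a fixed port (backgrounded; the test runs it as a daemon)
  out/minikube-linux-amd64 mount -p functional-102083 /tmp/mount-src:/mount-9p --port 46464 &
  sleep 5   # give the 9p server a moment to come up; the test retries findmnt instead
  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T /mount-9p | grep 9p"
  # Force-unmount in the guest and stop the background mount process
  out/minikube-linux-amd64 -p functional-102083 ssh "sudo umount -f /mount-9p"
  kill %1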

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1445387886/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1445387886/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1445387886/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-102083 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1445387886/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1445387886/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1445387886/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 service list -o json
functional_test.go:1494: Took "469.032455ms" to run "out/minikube-linux-amd64 -p functional-102083 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.110:30897
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-102083 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.110:30897
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
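
The ServiceCmd subtests above all resolve the same hello-node NodePort in different formats; a condensed sketch using the commands from this run:

  # List services, then resolve the NodePort endpoint as plain and HTTPS URLs on the VM's IP
  out/minikube-linux-amd64 -p functional-102083 service list -o json
  out/minikube-linux-amd64 -p functional-102083 service hello-node --url
  out/minikube-linux-amd64 -p functional-102083 service --namespace=default --https --url hello-node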

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-102083
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-102083
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-102083
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (187.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-584983 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 12:14:02.189331 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:14:29.894140 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-584983 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m7.072906577s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (187.73s)
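
For reference, the HA start being timed here is (a sketch using the flags from this run):

  # Start a multi-control-plane (HA) cluster on KVM with CRI-O, then verify every node and apiserver is up
  out/minikube-linux-amd64 start -p ha-584983 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr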

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-584983 -- rollout status deployment/busybox: (2.295075271s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-hxh55 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-s954n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-xjjwj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-hxh55 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-s954n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-xjjwj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-hxh55 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-s954n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-xjjwj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-hxh55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-hxh55 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-s954n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-s954n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-xjjwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-584983 -- exec busybox-7dff88458-xjjwj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (55.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-584983 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-584983 -v=7 --alsologtostderr: (54.205941709s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-584983 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp testdata/cp-test.txt ha-584983:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983:/home/docker/cp-test.txt ha-584983-m02:/home/docker/cp-test_ha-584983_ha-584983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test_ha-584983_ha-584983-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983:/home/docker/cp-test.txt ha-584983-m03:/home/docker/cp-test_ha-584983_ha-584983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test_ha-584983_ha-584983-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983:/home/docker/cp-test.txt ha-584983-m04:/home/docker/cp-test_ha-584983_ha-584983-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test_ha-584983_ha-584983-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp testdata/cp-test.txt ha-584983-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m02:/home/docker/cp-test.txt ha-584983:/home/docker/cp-test_ha-584983-m02_ha-584983.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test_ha-584983-m02_ha-584983.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m02:/home/docker/cp-test.txt ha-584983-m03:/home/docker/cp-test_ha-584983-m02_ha-584983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test_ha-584983-m02_ha-584983-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m02:/home/docker/cp-test.txt ha-584983-m04:/home/docker/cp-test_ha-584983-m02_ha-584983-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test_ha-584983-m02_ha-584983-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp testdata/cp-test.txt ha-584983-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt ha-584983:/home/docker/cp-test_ha-584983-m03_ha-584983.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test_ha-584983-m03_ha-584983.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt ha-584983-m02:/home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test_ha-584983-m03_ha-584983-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m03:/home/docker/cp-test.txt ha-584983-m04:/home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test_ha-584983-m03_ha-584983-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp testdata/cp-test.txt ha-584983-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3290211963/001/cp-test_ha-584983-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test.txt"
E0819 12:16:26.668046 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:26.674484 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:26.685941 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:26.707321 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:26.748687 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt ha-584983:/home/docker/cp-test_ha-584983-m04_ha-584983.txt
E0819 12:16:26.830913 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:16:26.992502 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983 "sudo cat /home/docker/cp-test_ha-584983-m04_ha-584983.txt"
E0819 12:16:27.314286 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt ha-584983-m02:/home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test.txt"
E0819 12:16:27.955792 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test_ha-584983-m04_ha-584983-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 cp ha-584983-m04:/home/docker/cp-test.txt ha-584983-m03:/home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m03 "sudo cat /home/docker/cp-test_ha-584983-m04_ha-584983-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.81s)
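
The CopyFile matrix above repeats one pattern for every node pair; a single round of it, as a sketch:

  # Copy a file from the host into the primary node, fan it out to a secondary node, and verify the contents
  out/minikube-linux-amd64 -p ha-584983 cp testdata/cp-test.txt ha-584983:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-584983 cp ha-584983:/home/docker/cp-test.txt ha-584983-m02:/home/docker/cp-test_ha-584983_ha-584983-m02.txt
  out/minikube-linux-amd64 -p ha-584983 ssh -n ha-584983-m02 "sudo cat /home/docker/cp-test_ha-584983_ha-584983-m02.txt"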

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.476537281s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-584983 node delete m03 -v=7 --alsologtostderr: (16.496158271s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.24s)
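
The delete-and-verify sequence above, condensed (a sketch; the plain kubectl get nodes at the end stands in for the test's go-template readiness check):

  # Delete the m03 control-plane node, then confirm the remaining nodes are still present and Ready
  out/minikube-linux-amd64 -p ha-584983 node delete m03
  out/minikube-linux-amd64 -p ha-584983 status
  kubectl get nodes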

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (346.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-584983 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 12:29:02.189634 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:31:26.668116 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:32:49.734011 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:34:02.189184 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-584983 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m45.998667572s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (346.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (72.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-584983 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-584983 --control-plane -v=7 --alsologtostderr: (1m12.021416767s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.84s)
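
Adding a control-plane node back into the cluster, as the test does (a sketch with the flags from this run):

  # Join a new control-plane node to the existing HA cluster and re-check overall status
  out/minikube-linux-amd64 node add -p ha-584983 --control-plane -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-584983 status -v=7 --alsologtostderr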

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
x
+
TestJSONOutput/start/Command (86.29s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-862777 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0819 12:36:26.667962 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-862777 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.293990586s)
--- PASS: TestJSONOutput/start/Command (86.29s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-862777 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-862777 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.65s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-862777 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-862777 --output=json --user=testUser: (6.649472038s)
--- PASS: TestJSONOutput/stop/Command (6.65s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-524123 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-524123 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.962672ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"996a477e-509d-4343-8c63-259bfa3084c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-524123] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3c50f6c-e531-4b4c-8963-2b05b79ce29a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"51dcd34b-6b3a-4da9-98ef-f2287b903d33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6a0c10cb-21e5-4e7a-bbf3-f1c47863551d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig"}}
	{"specversion":"1.0","id":"5ec7828a-f9f0-4961-ae04-477a3f680321","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube"}}
	{"specversion":"1.0","id":"85c6e5d8-7f53-4edc-b147-d6e3bb934e16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b82db8f6-316f-437f-9fcf-be6a725641c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"302668a5-4073-40c1-bfd1-d59dbdf22b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-524123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-524123
--- PASS: TestErrorJSONOutput (0.19s)
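
Each stdout line above is a CloudEvents-style JSON object, so the error event can be filtered out of the stream; a sketch (the jq filter and the trailing cleanup are additions, not part of the test):

  # Re-run the failing start with JSON output and pull the error event's exit code and name out with jq
  out/minikube-linux-amd64 start -p json-output-error-524123 --memory=2200 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.name'
  # Remove the leftover profile afterwards, as the test's cleanup step does
  out/minikube-linux-amd64 delete -p json-output-error-524123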

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (86.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-458279 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-458279 --driver=kvm2  --container-runtime=crio: (40.369768995s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-460618 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-460618 --driver=kvm2  --container-runtime=crio: (42.785665308s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-458279
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-460618
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-460618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-460618
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-460618: (1.007761162s)
helpers_test.go:175: Cleaning up "first-458279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-458279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-458279: (1.047071664s)
--- PASS: TestMinikubeProfile (86.10s)
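
The profile juggling above in shell form (a sketch; the start flags and profile names are the ones from this run):

  # Create two profiles, make each the active one in turn, and inspect the profile list as JSON
  out/minikube-linux-amd64 start -p first-458279 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p second-460618 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 profile first-458279
  out/minikube-linux-amd64 profile list -ojson
  # Clean up both profiles afterwards
  out/minikube-linux-amd64 delete -p second-460618
  out/minikube-linux-amd64 delete -p first-458279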

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (30.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-443223 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0819 12:39:02.189468 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-443223 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.086494626s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-443223 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-443223 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-464375 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-464375 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.007767633s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.01s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-464375 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-464375 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-443223 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-464375 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-464375 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-464375
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-464375: (1.27503156s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.28s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-464375
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-464375: (23.279831331s)
--- PASS: TestMountStart/serial/RestartStopped (24.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-464375 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-464375 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (107.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-172739 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 12:41:26.668264 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:42:05.258621 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-172739 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.361751402s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-172739 -- rollout status deployment/busybox: (2.878804939s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-czkl5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-w6966 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-czkl5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-w6966 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-czkl5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-w6966 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.36s)
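The DeployApp2Nodes steps above apply a two-replica busybox deployment and then run nslookup inside each pod through `minikube kubectl`. A hedged sketch of that loop; the profile name is hypothetical, the manifest path is the one used above, and listing pods without a selector assumes (as in this test) that only the busybox replicas exist in the default namespace:

package main

import (
	"log"
	"os/exec"
	"strings"
)

// mk runs the minikube binary and returns its combined output, aborting on a
// non-zero exit.
func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Apply the deployment and wait for the rollout to finish.
	mk("kubectl", "-p", "multinode-demo", "--", "apply", "-f", "testdata/multinodes/multinode-pod-dns-test.yaml")
	mk("kubectl", "-p", "multinode-demo", "--", "rollout", "status", "deployment/busybox")

	// Resolve an external and an in-cluster name from every busybox replica.
	names := mk("kubectl", "-p", "multinode-demo", "--", "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	for _, pod := range strings.Fields(names) {
		mk("kubectl", "-p", "multinode-demo", "--", "exec", pod, "--", "nslookup", "kubernetes.io")
		mk("kubectl", "-p", "multinode-demo", "--", "exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
	}
}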

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-czkl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-czkl5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-w6966 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-172739 -- exec busybox-7dff88458-w6966 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (48.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-172739 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-172739 -v 3 --alsologtostderr: (48.395461283s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.96s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-172739 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp testdata/cp-test.txt multinode-172739:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3931286913/001/cp-test_multinode-172739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739:/home/docker/cp-test.txt multinode-172739-m02:/home/docker/cp-test_multinode-172739_multinode-172739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m02 "sudo cat /home/docker/cp-test_multinode-172739_multinode-172739-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739:/home/docker/cp-test.txt multinode-172739-m03:/home/docker/cp-test_multinode-172739_multinode-172739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m03 "sudo cat /home/docker/cp-test_multinode-172739_multinode-172739-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp testdata/cp-test.txt multinode-172739-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3931286913/001/cp-test_multinode-172739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt multinode-172739:/home/docker/cp-test_multinode-172739-m02_multinode-172739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739 "sudo cat /home/docker/cp-test_multinode-172739-m02_multinode-172739.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739-m02:/home/docker/cp-test.txt multinode-172739-m03:/home/docker/cp-test_multinode-172739-m02_multinode-172739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m03 "sudo cat /home/docker/cp-test_multinode-172739-m02_multinode-172739-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp testdata/cp-test.txt multinode-172739-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3931286913/001/cp-test_multinode-172739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt multinode-172739:/home/docker/cp-test_multinode-172739-m03_multinode-172739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739 "sudo cat /home/docker/cp-test_multinode-172739-m03_multinode-172739.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 cp multinode-172739-m03:/home/docker/cp-test.txt multinode-172739-m02:/home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 ssh -n multinode-172739-m02 "sudo cat /home/docker/cp-test_multinode-172739-m03_multinode-172739-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.15s)
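The CopyFile matrix above repeats one pattern for every node pair: push a file with `minikube cp`, then read it back with `minikube ssh -n <node>` and compare. A minimal sketch of a single round trip, with binary path, profile, and node name as illustrative assumptions:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func mk(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// Copy a local file into the primary node of the profile ...
	mk("-p", "multinode-demo", "cp", "testdata/cp-test.txt", "multinode-demo:/home/docker/cp-test.txt")

	// ... then read it back over ssh and check it against the local copy.
	got := mk("-p", "multinode-demo", "ssh", "-n", "multinode-demo", "sudo cat /home/docker/cp-test.txt")
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Contains(got, want) {
		log.Fatal("copied file did not round-trip intact")
	}
}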

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-172739 node stop m03: (1.500096043s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-172739 status: exit status 7 (409.321739ms)

                                                
                                                
-- stdout --
	multinode-172739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-172739-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-172739-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-172739 status --alsologtostderr: exit status 7 (418.359916ms)

                                                
                                                
-- stdout --
	multinode-172739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-172739-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-172739-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:43:10.113567 2121745 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:43:10.113684 2121745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:43:10.113695 2121745 out.go:358] Setting ErrFile to fd 2...
	I0819 12:43:10.113700 2121745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:43:10.113921 2121745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 12:43:10.114143 2121745 out.go:352] Setting JSON to false
	I0819 12:43:10.114177 2121745 mustload.go:65] Loading cluster: multinode-172739
	I0819 12:43:10.114282 2121745 notify.go:220] Checking for updates...
	I0819 12:43:10.114638 2121745 config.go:182] Loaded profile config "multinode-172739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:43:10.114656 2121745 status.go:255] checking status of multinode-172739 ...
	I0819 12:43:10.115097 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.115172 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.137657 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0819 12:43:10.138181 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.138841 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.138865 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.139274 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.139492 2121745 main.go:141] libmachine: (multinode-172739) Calling .GetState
	I0819 12:43:10.141224 2121745 status.go:330] multinode-172739 host status = "Running" (err=<nil>)
	I0819 12:43:10.141245 2121745 host.go:66] Checking if "multinode-172739" exists ...
	I0819 12:43:10.141548 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.141585 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.157787 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0819 12:43:10.158200 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.158644 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.158667 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.158998 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.159178 2121745 main.go:141] libmachine: (multinode-172739) Calling .GetIP
	I0819 12:43:10.162019 2121745 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:43:10.162453 2121745 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:43:10.162480 2121745 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:43:10.162582 2121745 host.go:66] Checking if "multinode-172739" exists ...
	I0819 12:43:10.162878 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.162940 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.177944 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46401
	I0819 12:43:10.178269 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.178673 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.178689 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.178963 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.179166 2121745 main.go:141] libmachine: (multinode-172739) Calling .DriverName
	I0819 12:43:10.179333 2121745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:43:10.179354 2121745 main.go:141] libmachine: (multinode-172739) Calling .GetSSHHostname
	I0819 12:43:10.181822 2121745 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:43:10.182153 2121745 main.go:141] libmachine: (multinode-172739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:49:54", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:40:33 +0000 UTC Type:0 Mac:52:54:00:90:49:54 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-172739 Clientid:01:52:54:00:90:49:54}
	I0819 12:43:10.182178 2121745 main.go:141] libmachine: (multinode-172739) DBG | domain multinode-172739 has defined IP address 192.168.39.191 and MAC address 52:54:00:90:49:54 in network mk-multinode-172739
	I0819 12:43:10.182311 2121745 main.go:141] libmachine: (multinode-172739) Calling .GetSSHPort
	I0819 12:43:10.182471 2121745 main.go:141] libmachine: (multinode-172739) Calling .GetSSHKeyPath
	I0819 12:43:10.182610 2121745 main.go:141] libmachine: (multinode-172739) Calling .GetSSHUsername
	I0819 12:43:10.182877 2121745 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739/id_rsa Username:docker}
	I0819 12:43:10.259459 2121745 ssh_runner.go:195] Run: systemctl --version
	I0819 12:43:10.265426 2121745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:43:10.279768 2121745 kubeconfig.go:125] found "multinode-172739" server: "https://192.168.39.191:8443"
	I0819 12:43:10.279807 2121745 api_server.go:166] Checking apiserver status ...
	I0819 12:43:10.279839 2121745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:43:10.292956 2121745 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup
	W0819 12:43:10.301805 2121745 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:43:10.301850 2121745 ssh_runner.go:195] Run: ls
	I0819 12:43:10.306160 2121745 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0819 12:43:10.310116 2121745 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I0819 12:43:10.310139 2121745 status.go:422] multinode-172739 apiserver status = Running (err=<nil>)
	I0819 12:43:10.310152 2121745 status.go:257] multinode-172739 status: &{Name:multinode-172739 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:43:10.310187 2121745 status.go:255] checking status of multinode-172739-m02 ...
	I0819 12:43:10.310587 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.310631 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.326763 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0819 12:43:10.327197 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.327684 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.327706 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.327999 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.328230 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .GetState
	I0819 12:43:10.329845 2121745 status.go:330] multinode-172739-m02 host status = "Running" (err=<nil>)
	I0819 12:43:10.329861 2121745 host.go:66] Checking if "multinode-172739-m02" exists ...
	I0819 12:43:10.330142 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.330177 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.345872 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0819 12:43:10.346343 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.346761 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.346779 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.347053 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.347277 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .GetIP
	I0819 12:43:10.350073 2121745 main.go:141] libmachine: (multinode-172739-m02) DBG | domain multinode-172739-m02 has defined MAC address 52:54:00:a6:03:3c in network mk-multinode-172739
	I0819 12:43:10.350602 2121745 main.go:141] libmachine: (multinode-172739-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:03:3c", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:41:32 +0000 UTC Type:0 Mac:52:54:00:a6:03:3c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-172739-m02 Clientid:01:52:54:00:a6:03:3c}
	I0819 12:43:10.350635 2121745 main.go:141] libmachine: (multinode-172739-m02) DBG | domain multinode-172739-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:a6:03:3c in network mk-multinode-172739
	I0819 12:43:10.350727 2121745 host.go:66] Checking if "multinode-172739-m02" exists ...
	I0819 12:43:10.351058 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.351102 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.366704 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I0819 12:43:10.367202 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.367741 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.367761 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.368058 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.368247 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .DriverName
	I0819 12:43:10.368439 2121745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:43:10.368459 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .GetSSHHostname
	I0819 12:43:10.371295 2121745 main.go:141] libmachine: (multinode-172739-m02) DBG | domain multinode-172739-m02 has defined MAC address 52:54:00:a6:03:3c in network mk-multinode-172739
	I0819 12:43:10.371809 2121745 main.go:141] libmachine: (multinode-172739-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:03:3c", ip: ""} in network mk-multinode-172739: {Iface:virbr1 ExpiryTime:2024-08-19 13:41:32 +0000 UTC Type:0 Mac:52:54:00:a6:03:3c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-172739-m02 Clientid:01:52:54:00:a6:03:3c}
	I0819 12:43:10.371840 2121745 main.go:141] libmachine: (multinode-172739-m02) DBG | domain multinode-172739-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:a6:03:3c in network mk-multinode-172739
	I0819 12:43:10.372000 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .GetSSHPort
	I0819 12:43:10.372201 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .GetSSHKeyPath
	I0819 12:43:10.372356 2121745 main.go:141] libmachine: (multinode-172739-m02) Calling .GetSSHUsername
	I0819 12:43:10.372496 2121745 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19479-2086935/.minikube/machines/multinode-172739-m02/id_rsa Username:docker}
	I0819 12:43:10.451307 2121745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:43:10.465615 2121745 status.go:257] multinode-172739-m02 status: &{Name:multinode-172739-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:43:10.465662 2121745 status.go:255] checking status of multinode-172739-m03 ...
	I0819 12:43:10.466024 2121745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:43:10.466105 2121745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:43:10.482977 2121745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0819 12:43:10.483427 2121745 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:43:10.483918 2121745 main.go:141] libmachine: Using API Version  1
	I0819 12:43:10.483939 2121745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:43:10.484324 2121745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:43:10.484557 2121745 main.go:141] libmachine: (multinode-172739-m03) Calling .GetState
	I0819 12:43:10.486316 2121745 status.go:330] multinode-172739-m03 host status = "Stopped" (err=<nil>)
	I0819 12:43:10.486333 2121745 status.go:343] host is not running, skipping remaining checks
	I0819 12:43:10.486342 2121745 status.go:257] multinode-172739-m03 status: &{Name:multinode-172739-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
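Besides stopping m03, the test above checks the exit-code contract of `minikube status`: with a node down it still prints the per-node table but exits non-zero (exit status 7 in this run), so callers have to treat that exit as data rather than a hard failure. A small sketch of that handling, with binary path and profile name as assumptions:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-demo", "status")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all nodes running:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 is what the run above produced with one worker stopped.
		fmt.Printf("some nodes are stopped:\n%s", out)
	default:
		log.Fatalf("status failed: %v\n%s", err, out)
	}
}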

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-172739 node start m03 -v=7 --alsologtostderr: (36.881919514s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.51s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-172739 node delete m03: (1.615711502s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (184.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-172739 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 12:54:02.189434 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-172739 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m3.490975433s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-172739 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (184.01s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-172739
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-172739-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-172739-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.040846ms)

                                                
                                                
-- stdout --
	* [multinode-172739-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-172739-m02' is duplicated with machine name 'multinode-172739-m02' in profile 'multinode-172739'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-172739-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-172739-m03 --driver=kvm2  --container-runtime=crio: (43.63929818s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-172739
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-172739: exit status 80 (226.722555ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-172739 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-172739-m03 already exists in multinode-172739-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_8.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-172739-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-172739-m03: (1.044076213s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.02s)

                                                
                                    
TestScheduledStopUnix (112.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-958307 --memory=2048 --driver=kvm2  --container-runtime=crio
E0819 12:58:45.262504 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-958307 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.455888236s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-958307 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-958307 -n scheduled-stop-958307
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-958307 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-958307 --cancel-scheduled
E0819 12:59:02.188759 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-958307 -n scheduled-stop-958307
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-958307
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-958307 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-958307
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-958307: exit status 7 (65.448853ms)

                                                
                                                
-- stdout --
	scheduled-stop-958307
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-958307 -n scheduled-stop-958307
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-958307 -n scheduled-stop-958307: exit status 7 (65.365411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-958307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-958307
--- PASS: TestScheduledStopUnix (112.14s)
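The scheduled-stop flow above boils down to two flags on `minikube stop`: --schedule arms a delayed stop of the profile, and --cancel-scheduled disarms it before it fires. A minimal sketch of that sequence, assuming the same binary path and a hypothetical profile name:

package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	// Arm a stop five minutes out, then cancel it before it triggers.
	mk("stop", "-p", "scheduled-stop-demo", "--schedule", "5m")
	mk("stop", "-p", "scheduled-stop-demo", "--cancel-scheduled")
}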

                                                
                                    
TestRunningBinaryUpgrade (211.11s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2159479355 start -p running-upgrade-038031 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2159479355 start -p running-upgrade-038031 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m53.68946531s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-038031 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-038031 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.032358483s)
helpers_test.go:175: Cleaning up "running-upgrade-038031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-038031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-038031: (1.899827313s)
--- PASS: TestRunningBinaryUpgrade (211.11s)
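The running-binary upgrade above follows a simple pattern: create the profile with an older minikube release, then run `start` on the same profile with the current binary so the existing cluster is upgraded in place. A hedged sketch of that pattern; the old-binary path is a placeholder for a previously downloaded release and the profile name is hypothetical (note the older release still takes --vm-driver where the current binary takes --driver):

package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBinary := "/tmp/minikube-v1.26.0" // placeholder for an older release binary
	run(oldBinary, "start", "-p", "upgrade-demo", "--memory=2200",
		"--vm-driver=kvm2", "--container-runtime=crio")
	run("out/minikube-linux-amd64", "start", "-p", "upgrade-demo", "--memory=2200",
		"--driver=kvm2", "--container-runtime=crio")
}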

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237597 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-237597 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.139908ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-237597] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
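StartNoK8sWithVersion pins down a usage check: combining --no-kubernetes with an explicit --kubernetes-version is rejected up front with exit code 14 (MK_USAGE) and no VM is created. A sketch that asserts that behaviour, with binary path and profile name as illustrative assumptions:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "nok8s-demo", "--no-kubernetes", "--kubernetes-version=1.20",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Printf("rejected as expected (MK_USAGE):\n%s", out)
		return
	}
	log.Fatalf("expected exit code 14, got err=%v\n%s", err, out)
}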

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (95.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237597 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237597 --driver=kvm2  --container-runtime=crio: (1m35.099068993s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-237597 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.34s)

                                                
                                    
TestNetworkPlugins/group/false (2.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-649777 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-649777 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.140806ms)

                                                
                                                
-- stdout --
	* [false-649777] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:00:13.120520 2129346 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:00:13.120661 2129346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:00:13.120674 2129346 out.go:358] Setting ErrFile to fd 2...
	I0819 13:00:13.120681 2129346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:00:13.120901 2129346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-2086935/.minikube/bin
	I0819 13:00:13.121478 2129346 out.go:352] Setting JSON to false
	I0819 13:00:13.122427 2129346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":164563,"bootTime":1723907850,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 13:00:13.122489 2129346 start.go:139] virtualization: kvm guest
	I0819 13:00:13.124446 2129346 out.go:177] * [false-649777] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 13:00:13.125682 2129346 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:00:13.125728 2129346 notify.go:220] Checking for updates...
	I0819 13:00:13.127739 2129346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:00:13.128717 2129346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-2086935/kubeconfig
	I0819 13:00:13.129828 2129346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-2086935/.minikube
	I0819 13:00:13.130802 2129346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 13:00:13.131715 2129346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:00:13.133193 2129346 config.go:182] Loaded profile config "NoKubernetes-237597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:00:13.133315 2129346 config.go:182] Loaded profile config "force-systemd-env-344812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:00:13.133403 2129346 config.go:182] Loaded profile config "offline-crio-209114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 13:00:13.133501 2129346 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:00:13.172502 2129346 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 13:00:13.173600 2129346 start.go:297] selected driver: kvm2
	I0819 13:00:13.173616 2129346 start.go:901] validating driver "kvm2" against <nil>
	I0819 13:00:13.173629 2129346 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:00:13.175592 2129346 out.go:201] 
	W0819 13:00:13.176712 2129346 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 13:00:13.177785 2129346 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-649777 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-649777

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-649777" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: kubelet daemon config:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> k8s: kubelet logs:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-649777

>>> host: docker daemon status:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: docker daemon config:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /etc/docker/daemon.json:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: docker system info:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: cri-docker daemon status:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: cri-docker daemon config:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: cri-dockerd version:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: containerd daemon status:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: containerd daemon config:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /etc/containerd/config.toml:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: containerd config dump:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: crio daemon status:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: crio daemon config:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: /etc/crio:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

>>> host: crio config:
* Profile "false-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649777"

----------------------- debugLogs end: false-649777 [took: 2.698073774s] --------------------------------
helpers_test.go:175: Cleaning up "false-649777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-649777
--- PASS: TestNetworkPlugins/group/false (2.95s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (38.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237597 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237597 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.237015864s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-237597 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-237597 status -o json: exit status 2 (251.818754ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-237597","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-237597
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-237597: (1.099985658s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.59s)

                                                
                                    
TestNoKubernetes/serial/Start (51.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237597 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237597 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.954261829s)
--- PASS: TestNoKubernetes/serial/Start (51.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-237597 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-237597 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.505704ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.615932425s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.024762435s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.64s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-237597
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-237597: (1.302671742s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237597 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237597 --driver=kvm2  --container-runtime=crio: (21.818173215s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-237597 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-237597 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.429498ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (120.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1967391825 start -p stopped-upgrade-876129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1967391825 start -p stopped-upgrade-876129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.407672553s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1967391825 -p stopped-upgrade-876129 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1967391825 -p stopped-upgrade-876129 stop: (2.146538656s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-876129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-876129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.035774474s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.59s)

                                                
                                    
TestPause/serial/Start (103.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-698949 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0819 13:06:09.737507 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-698949 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m43.500065867s)
--- PASS: TestPause/serial/Start (103.50s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-876129
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m28.121060856s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0819 13:06:26.668687 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.181145415s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.18s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xzmst" [3b073d03-d661-4e2b-a8e1-fa3136de0d19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xzmst" [3b073d03-d661-4e2b-a8e1-fa3136de0d19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00501217s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mp5gj" [9a37bf09-4b1a-4710-80f2-180c558eb8af] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003985767s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-649777 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vpll6" [8c0fbba5-9772-48cb-b51d-cf53b3faf352] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vpll6" [8c0fbba5-9772-48cb-b51d-cf53b3faf352] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003852318s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-649777 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.571899657s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.57s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (79.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m19.504236907s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m31.934766932s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.93s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (124.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0819 13:09:02.189670 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m4.391172754s)
--- PASS: TestNetworkPlugins/group/flannel/Start (124.39s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qqq2j" [e3503451-4e08-458e-b5e8-74e23a2e888e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006145323s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-649777 replace --force -f testdata/netcat-deployment.yaml: (1.360555793s)
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fbhj9" [ae584c00-b068-427b-8c60-5a19d004f091] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fbhj9" [ae584c00-b068-427b-8c60-5a19d004f091] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004344899s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.65s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kkh82" [062b9258-315e-42c3-be8a-865bdee7e6a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kkh82" [062b9258-315e-42c3-be8a-865bdee7e6a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004062579s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-649777 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-649777 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sghw4" [89622f01-1728-4087-b1ad-75bbab5aba9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sghw4" [89622f01-1728-4087-b1ad-75bbab5aba9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.005217596s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (60.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-649777 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.614925306s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-649777 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (109.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-441627 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-441627 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m49.185573683s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (109.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hqp77" [ae464229-3807-4308-ba06-f96735d1954c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004827524s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hplpq" [1b243a7a-9670-48d3-9fc8-ea8d3f7e3bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hplpq" [1b243a7a-9670-48d3-9fc8-ea8d3f7e3bfc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004500013s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-649777 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-649777 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-649777 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-649777 replace --force -f testdata/netcat-deployment.yaml: (1.344615872s)
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9nn7h" [10f211f2-cc8d-4554-872b-c0dd446162bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9nn7h" [10f211f2-cc8d-4554-872b-c0dd446162bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004244631s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (103.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-038572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-038572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m43.295657941s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (103.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (21.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-649777 exec deployment/netcat -- nslookup kubernetes.default
E0819 13:11:26.668322 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-649777 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149258399s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-649777 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-649777 exec deployment/netcat -- nslookup kubernetes.default: (5.173708361s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.06s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-649777 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (51.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-093052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-093052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (51.990202211s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-441627 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [579fa6de-fee5-4b5c-89c8-200b06356681] Pending
helpers_test.go:344: "busybox" [579fa6de-fee5-4b5c-89c8-200b06356681] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [579fa6de-fee5-4b5c-89c8-200b06356681] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005012456s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-441627 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-441627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-441627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.072281545s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-441627 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-093052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-093052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.007163934s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-093052 --alsologtostderr -v=3
E0819 13:12:51.878110 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:51.884477 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:51.896233 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:51.917583 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:51.959043 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:52.040530 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:52.202724 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:52.524495 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:53.165965 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:54.280144 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:54.447718 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:12:57.010008 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-093052 --alsologtostderr -v=3: (7.352300015s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-093052 -n newest-cni-093052
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-093052 -n newest-cni-093052: exit status 7 (64.014186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-093052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-093052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-093052 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.076984237s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-093052 -n newest-cni-093052
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [984019b3-bc63-4430-a519-0ed7bd8451e0] Pending
helpers_test.go:344: "busybox" [984019b3-bc63-4430-a519-0ed7bd8451e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [984019b3-bc63-4430-a519-0ed7bd8451e0] Running
E0819 13:13:02.131575 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:13:04.522025 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.007246088s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-038572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-038572 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-093052 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-093052 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-093052 -n newest-cni-093052
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-093052 -n newest-cni-093052: exit status 2 (237.806294ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-093052 -n newest-cni-093052
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-093052 -n newest-cni-093052: exit status 2 (231.771889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-093052 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-093052 -n newest-cni-093052
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-093052 -n newest-cni-093052
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (81.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-823470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 13:14:02.188941 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:05.966020 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:13.817615 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.644523 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.650958 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.662378 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.683891 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.725435 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.806915 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:27.968495 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:28.290064 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:28.931747 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:30.213021 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-823470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m21.558542494s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-823470 create -f testdata/busybox.yaml
E0819 13:15:00.167843 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f23ac0a-5280-4108-a144-9fd2460f27ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0819 13:15:01.332266 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4f23ac0a-5280-4108-a144-9fd2460f27ec] Running
E0819 13:15:04.011488 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005624428s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-823470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (683.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-441627 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 13:15:01.449293 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-441627 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m23.656767274s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441627 -n no-preload-441627
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (683.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-823470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0819 13:15:08.621215 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:09.133815 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-823470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (536.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-038572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 13:15:39.335837 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.342304 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.353710 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.375215 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.416931 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.498473 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.660120 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.857783 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:39.982331 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:40.624422 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:41.905984 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:44.468288 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:49.582730 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:49.590231 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:15:59.832555 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:02.776153 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:08.804924 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:08.811308 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:08.822696 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:08.844112 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:08.885874 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:08.967383 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:09.128933 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:09.451031 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:10.092710 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:11.374825 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:13.936609 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:19.058055 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:20.314234 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:20.819252 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:26.668711 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:16:29.299935 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-038572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (8m56.576214127s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038572 -n default-k8s-diff-port-038572
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (536.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-620467 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-620467 --alsologtostderr -v=3: (5.30715507s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-620467 -n old-k8s-version-620467: exit status 7 (62.905377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-620467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (487.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-823470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 13:17:42.741421 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:17:44.026902 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:17:51.878651 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:18:11.730266 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:18:19.581650 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:18:23.198192 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:18:52.665945 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:19:02.189143 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:19:27.644699 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:19:40.836929 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:19:55.346655 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:19:58.879696 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:20:08.539395 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/custom-flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:20:26.583621 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/enable-default-cni-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:20:39.336049 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:21:07.039528 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/flannel-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:21:08.804591 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:21:26.667662 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:21:36.508303 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/bridge-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:22:44.026009 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/auto-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:22:49.739364 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/functional-102083/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:22:51.878629 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/kindnet-649777/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:24:02.189213 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/addons-442145/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:24:27.644581 2094193 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-2086935/.minikube/profiles/calico-649777/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-823470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (8m7.049246176s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-823470 -n embed-certs-823470
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (487.30s)


Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.93
263 TestNetworkPlugins/group/cilium 3.23
278 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-649777 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-649777" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-649777

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649777"

                                                
                                                
----------------------- debugLogs end: kubenet-649777 [took: 2.784583567s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-649777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-649777
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-649777 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-649777" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-649777

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-649777" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649777"

                                                
                                                
----------------------- debugLogs end: cilium-649777 [took: 3.078314576s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-649777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-649777
--- SKIP: TestNetworkPlugins/group/cilium (3.23s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-554476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-554476
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    